Thank you, Ministers Hoekstra and Park, for inviting us to this Summit and for the opportunity to deliver these remarks. We are grateful. We commend the Netherlands and the Government of Korea for launching this initiative on the responsible development and use of artificial intelligence in the military domain, and for all the effort you have put into ensuring that this Summit is a success.
President Biden stated during the U.S. State of the Union Address last week that “we are not bystanders to history.” Those of us in this room, and the organizations we represent, are empowered – dare I say responsible – to ensure advancements in our institutions and international norms keep pace with advancements in technology. I’ll be the first to admit, this is easier said than done. We are not bystanders to history. Our actions will write our collective history.
Artificial intelligence is a transformational, general-purpose technology that has altered our human ambitions and insights in positive ways. The opportunities and risks of harnessing the power of this novel technology for economies and societies, not just militaries, will alter the course of history in ways unknown. For example, applications of AI could enhance arms control by helping us solve complex verification challenges and increase confidence in states’ commitments. As Secretary Blinken said, “we want to ensure that as AI and related technologies transform how we live, how we work, how we compete, how we defend ourselves, that we’re staying ahead of change, indeed that we are shaping change and, critically, making sure it delivers for our people.” This is not just important for the United States; it is important for the world.
As we have discussed these past few days, advancements in this technology will fundamentally alter militaries around the world, large and small. We already see the way that artificial intelligence could shape the future of militaries in Ukraine. Every day in Ukraine, we see uses of AI, from how Ukrainian soldiers process data and analyze the situation on the battlefield, to how Ukraine’s population accesses the Internet and the broader world.
The United States approaches the use of artificial intelligence for military purposes from the perspective of responsible speed, meaning our attempts to harness the benefits of AI must be accompanied by a focus on safe and responsible behavior that is consistent with the law of war and international humanitarian law.
"Inflection point" is a term used frequently of late, but for good reason. Collectively, we have an opportunity to shape the way militaries develop and use AI. Should we use this technology to reduce civilian casualties in conflicts? Should we use this technology to avoid miscalculation? Of course, these are questions to which we will all agree on an answer. But there are also harder questions where we might not always agree. We have a unique opportunity to get ahead of the game and use this Summit as the first step towards creating strong norms of responsible behavior surrounding military uses of AI.
It is time to turn words into action. The United States is here today to announce a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. We invite all states to join us in implementing international norms as they pertain to the military development and use of AI and autonomy. We are inspired by this Summit and believe our Declaration can be a focal point for international cooperation. It is a tangible representation of how this Summit can yield real benefits for international stability.
I want to tell you more about why this is so important, and how working together can enhance international stability.
AI will transform many things that militaries do – from managing logistics to how they train and operate. No doubt, this transformation can bring many benefits. Responsible militaries will use AI capabilities to improve decision-making and situational awareness, helping them avoid unintended engagements. AI capabilities will increase accuracy and precision in the use of force, which will also help strengthen the implementation of international humanitarian law's protections for civilians. Military uses of AI can make the world safer.
However, due to the fragility of many AI systems, states that rush to harness AI without a careful, principled approach could deploy systems with unpredictable consequences – whether because the systems are poorly designed, inadequately tested, or because commanders and operators do not possess an adequate understanding of the capabilities and limitations of those systems. The same is true across every sector. Safe and responsible norms are needed beyond military applications – in education, banking, and law enforcement as well.
Approaching military applications of AI and autonomy from a responsible speed perspective means we must innovate quickly to take advantage of the rate of technological change and to harness the benefits of AI. But we must do this safely and responsibly. What does this mean in practice, given the risks that I have just laid out, and that many at this Summit have identified?
The United States has taken several steps to make transparent the principles and practices that guide our approach to AI in the military domain, through DoD's AI Ethical Principles and Responsible AI Strategy and Implementation Pathway, both of which are publicly available. The United States is committed to developing systems that are reliable, ensuring that operators can employ any weapon systems that use AI technologies with appropriate care and in accordance with applicable domestic and international law, and ensuring that human accountability is retained. In addition, we are committed to ensuring that AI capabilities used during military operations are employed within a responsible human chain of command and control.
Additionally, the United States, the United Kingdom, and France have already made a joint commitment to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment. This existing statement spells out clearly what responsible states with us today undoubtedly agree on – let's ensure this most dangerous of military technologies remains firmly under human supervision.
We’re very proud of our work in this area, and it reflects our deep commitment to promoting transparency, predictability, and stability in the adoption of emerging military technologies. However, this is a shared challenge for the international community – action at the national level is not sufficient to address the risks. Our collective ability as responsible states to converge on an understanding of responsible state behavior can advance our collective security and demonstrate our collective leadership on the responsible use of military AI technology. This is a theme we have heard many times at this conference, and the United States stands firmly in agreement.
Because AI is a rapidly changing technology, we have an obligation to create strong norms of responsible behavior concerning military uses of AI, and to do so in a way that keeps in mind that applications of AI by militaries will undoubtedly change in the coming years. This is why we are launching our Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy at the REAIM Summit. We have spoken to many countries about this initiative already, and are grateful for the support and engagement we have received so far. While this is an American document at present, it reflects what we believe are shared values that many responsible states hold.
This Declaration consists of voluntary guidelines describing best practices for the use of AI in a military context. The aim of the Declaration is to establish standards for responsible behavior and put in place measures to increase transparency and communication and to reduce the risks of inadvertent conflict and escalation. The norms described in the Declaration will provide a basis for sharing lessons learned from our experience to help guide states' development and deployment of military AI. It includes commitments such as steps to:
- Ensure that relevant personnel exercise appropriate care, including appropriate levels of human judgment, in the development, deployment, and use of military AI capabilities;
- Engineer military AI capabilities so that they possess the ability to avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior;
- Ensure that military AI capabilities are developed with auditable methodologies, data sources, design procedures, and documentation;
- Minimize unintended bias in military AI capabilities;
- And ensure compliance with international law, in particular, international humanitarian law.
These strong and transparent norms apply across military domains, regardless of a system’s functionality or scope of potential harm. Adherence to these principles would ensure military capabilities are reliable, and reduce the mutual risks posed by the irresponsible use of AI, including accidents and inadvertent escalation. They also preserve the right of self-defense and states’ ability to responsibly develop and use military AI. We are making the text of the Declaration available to review. We recognize that states vary widely in their approaches and internal deliberations on military AI policies. Still, we have an opportunity to work together to make a powerful international statement about responsible military uses of AI and autonomy.
We have put a lot of work and thought into the principles enumerated in this Declaration, and we welcome efforts to strengthen this document both at and after the Summit. We have talked with many states before and during the Summit, and we are encouraged by what seems to be an emerging shared commitment to responsible behavior.
We hope to get your feedback and partnership to turn words into action in ways that strengthen international norms and make it possible for all of the states here today to endorse this Declaration. This is the spirit of this Summit come to life in an outcome we should all be able to embrace. In the coming weeks, we hope to work with like-minded states to discuss implementation pathways which reinforce strong international norms.
We also hope that this Declaration is the beginning of a process. There is much work to be done to develop a shared understanding. We need further engagement and opportunities like the REAIM Summit to make the norms the Declaration promotes a reality, through follow-up dialogue, working conversations on implementation, and more.
We want to emphasize again that we are open to engagement with any country that is interested in joining us in taking this crucial first step toward building an international consensus on responsible application of AI and autonomy in the military domain.
We have a unique opportunity here, at the dawn of the military AI era, to set strong norms of responsible behavior. We want to take this step forward together with the international community.
Let me close and again thank you Ministers Hoekstra and Park for your countries’ work on these challenging issues. And thanks to everyone here for listening, and for your upcoming engagement. We look forward to further discussions at this Summit and continuing the dialogue on responsible AI in the military domain.