By Lewis-Burke Associates, LLC
The Defense Innovation Board (DIB), an independent advisory committee tasked with considering how to improve the internal operations and culture of the Department of Defense (DOD), released a draft report of its long-anticipated recommendations for principles of artificial intelligence (AI) at DOD. The Board’s principles and accompanying recommendations generally align with the issues and recommendations highlighted in HFES’s public comments, which were submitted for DIB’s consideration in September.
DIB states that achieving technological and military advantage over adversaries is central, but not sufficient, to DOD’s mission to protect the Nation, and that it is essential that DOD remain committed to its values in all initiatives and operations. The report highlights the global context that makes a set of AI principles important and timely: the current global race to develop AI, a growing contest between democratic and authoritarian norms, DOD’s efforts to build trust in AI both within and outside the Department, and its broad efforts to build stronger ties with academia and the private sector to accomplish its objectives. With this context in mind, DIB recommends that DOD adopt five principles in its use of AI:
- “Responsible: Human beings should exercise appropriate levels of judgement and remain responsible for the development, deployment, use, and outcomes of AI systems.
- Equitable: DOD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.
- Traceable: DOD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.
- Governable: DOD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.”
To ensure these principles are properly implemented, DIB recommends that DOD take the following actions:
- Formalize DIB’s AI principles in DOD policies.
- Establish a steering committee of senior DOD officials to ensure that an AI strategy and AI principles are implemented.
- Cultivate and grow the field of AI engineering within DOD’s research enterprise.
- Enhance DOD training and workforce programs pertaining to AI for Service Members at all levels.
- Invest in research on novel security aspects of AI.
- Invest in research to bolster reproducibility of AI systems.
- Define benchmarks for measuring the reliability and performance of AI systems.
- Strengthen and create new test and evaluation techniques for AI systems.
- Through the Joint Artificial Intelligence Center (JAIC), develop a risk management methodology.
- Ensure, through the JAIC, proper implementation of AI ethics principles.
- Expand research investments on understanding how to implement AI ethics principles through a Multidisciplinary University Research Initiative (MURI).
- Direct the JAIC to convene an annual conference on AI safety, security, and robustness.
The DIB report noted that although a number of issues are already covered by DOD’s ethics frameworks and policies, including its adherence to International Humanitarian Law (known as the Laws of War), the report addresses considerations unique to AI as an emerging technology that still need to be reflected in DOD policies. In a recent post, JAIC Director, Air Force Lieutenant General John “Jack” Shanahan, noted that the DIB’s recommendations will help DOD uphold its commitment to ethics outlined in DOD’s AI strategy and its intention to rigorously test and set standards for new military technologies.
Other highlights in the report relevant to HFES include:
- The Board highlights a number of factors that distinguish AI from other new technologies, including the potential to augment or replace human judgement, safety challenges, the breadth of applications across a wide variety of sectors, uncertainty over the direction and progress of AI research & development (R&D), the private sector’s leadership in developing AI, the ease with which non-state actors can access and proliferate AI systems, potential bias issues, and the potential for AI systems to make mistakes that a human would not make.
- DIB notes that, if used properly, AI will help DOD better comply with and enforce the laws of war.
- Given DOD’s commitment to people, DIB states that AI systems must be designed, developed, and deployed in a human-centered manner, as humans will ultimately be responsible for the actions of an AI system.
- DIB advises that, despite DOD’s robust test and evaluation (T&E) and verification and validation (V&V) processes, the Department may need to develop new ways to assure the reliability of AI systems across their lifespan, especially if these systems continue to learn after deployment.
- The report notes that intuitive user interfaces can increase situational awareness and lead to better and faster decision-making.
- DIB recommends steps to enhance the traceability of AI systems, such as providing design methodologies and data sources to DOD stakeholders during system development, as well as conducting after-action reviews of systems to check for biases and other issues that may develop.
- DIB states in its conclusion that ethical, as well as other legal and policy considerations, cannot be “bolted on” after a system is built but must be integrated throughout the development of new AI systems.
HFES previously provided public comments to DIB to inform and shape these principles. The Society offered guidance on a number of issues DOD must address in adopting AI for military operations, including limitations of AI systems, learning biases, human oversight of autonomous systems, human-AI interfaces, and the use of AI for decision support, among others. HFES also argued that, while AI can potentially help DOD prevent unnecessary military and civilian casualties, human factors/ergonomics (HF/E) must be integrated early in development to make this possible.
Sources and Additional Information:
- A summary of DIB’s report, “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense,” listing DIB’s principles and recommendations is available here. The comprehensive report can be found here.
- HFES’s comments to the Defense Innovation Board can be found here.
- More information on the Defense Innovation Board’s AI Principles Project can be found here.