What Security Companies Need to Know About the New OMB Guidance on Use of AI by Federal Agencies
On March 28, 2024, the White House Office of Management and Budget (OMB) issued the final version of its memorandum (M-24-10) instructing federal agencies regarding implementation of the Biden administration’s Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence (AI).
Under M-24-10, agencies are instructed to designate chief AI officers (CAIOs), convene AI governance bodies, implement compliance plans and expand AI use case inventories. Among other provisions, a primary feature of the new policy is that agencies will be required to apply a “minimum baseline” of practices to manage risks of “rights-impacting AI” and “safety-impacting AI.” Importantly, the implementation deadline has been pushed back from Aug. 1 to Dec. 1, 2024, as many agencies are only now beginning to consider their implementation processes.
It is critical for government contractors and suppliers of products or services that include AI to become familiar with the guidance, because whatever new processes agencies determine are necessary will in many cases flow down to providers to assist with implementation. Under the new policy, for applicable AI these processes include completing an impact assessment, providing public notice, evaluating design and training data, operational testing, post-deployment monitoring, operator training, public feedback mechanisms and other practices. Notably, the policy does not follow a strictly risk-based approach; contractors should be aware that it contains prescriptive elements for determining whether an activity involves a high-risk AI system and what procedures are required.
In December 2023, the Security Industry Association (SIA) provided comments on OMB’s draft guidance. Supporting its goals “to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government,” SIA also offered three additional recommendations to improve the guidance:
- Providing agencies with more flexibility to tailor risk mitigation in greater alignment with the National Institute of Standards and Technology AI Risk Management Framework
- Clarifying key definitions to ensure new requirements appropriately focus on applications that are truly high-risk
- Adjusting requirements related to developer data disclosure to shield IP and proprietary information and better protect data security
The final guidance addresses these issues in several ways.
For AI to be considered “rights-impacting” or “safety-impacting,” the final guidance added to the criteria that the particular AI output must serve as a “principal basis for a decision or action” significantly impacting rights or safety, a narrower standard than in the draft memo. Additionally, for listed purposes that are presumed to be “safety-impacting” and “rights-impacting” AI, the qualifier for certain agency activities or decisions was changed from “control or meaningfully influence the outcome” to “control or significantly influence the outcome.”
It also includes new language clarifying that CAIOs have the flexibility to determine that a presumed safety-impacting or rights-impacting AI application does not in fact meet those definitions. As the memorandum states:
“Agencies must review each current or planned use of AI to assess whether it matches the definition of safety-impacting AI or rights-impacting AI. When conducting such an assessment, as reflected by the definitions of safety-impacting AI and rights-impacting AI in Section 6 of this memorandum, agencies must look to whether the particular AI output serves as a principal basis for a decision or action.”
Additionally, there is significant tailoring throughout the final document that clarifies and narrows requirements, including for minimum practices. In one example, regarding consulting feedback from affected groups, the draft directed agencies receiving “negative feedback” to consider “not deploying the AI or removing the AI from use.” Under the final document, in contrast, agencies are directed to “manage the risks of public consultation in contexts like fraud prevention and law enforcement investigations, where consulting with the targeted individual is impractical but consulting with a representative group may be appropriate,” and, in assessing feedback, to discontinue use of the AI only if they determine that the “use of AI in a given context would cause more harm than good.”
In another example, the draft would have required agencies to provide a mechanism to “conveniently opt out from AI functionality in favor of a human alternative where practicable,” while the final version clarifies, “Agencies are not required to provide the ability to opt-out if the AI functionality is solely used for the prevention, detection, and investigation of fraud or cybersecurity incidents, or the conduct of a criminal investigation.” It further states, “CAIOs are additionally permitted to waive this opt-out requirement if they can demonstrate that a human alternative would result in a service that is less fair or if an opt-out would impose undue hardship on the agency.”
While in some respects the finalized policy is narrower, in other places some notable additions were made, particularly to the lists of presumed high-risk AI categories. Regarding the list of presumed “safety-impacting” AI uses, the following notable changes were made in the final document:
- The section referring to critical infrastructure was narrowed to apply only to AI uses that are “controlling the safety-critical functions” of critical infrastructure.
- Added to the list were “detecting the presence of dangerous weapons or a violent act” and “choosing to summon first responders to an emergency.”
Notably, the final document retains within the list of presumed “safety-impacting” AI uses, “Controlling access to or security of government facilities.”
Regarding the list of presumed “rights-impacting” AI uses, the following notable changes were made:
- In the law enforcement context, the list was expanded to include “identifying criminal suspects,” “detecting gunshots” and “conducting biometric identification.”
- Also added were “conducting biometric identification for one-to-many identification in publicly accessible spaces,” and “providing live language interpretation or translation, without a competent interpreter or translator present, for an interaction that directly informs an agency decision or action.”
The final document also includes a new subsection on “Responsible Procurement of AI for Biometric Identification,” which encourages agencies to do the following:
- Assess the data used to train or operate the AI to ensure it was lawfully collected or used and is sufficiently accurate; and
- Request supporting documentation or test results to validate the accuracy, reliability and validity of the AI’s ability to match identities.
Again, further understanding OMB’s M-24-10 memorandum is important for federal contractors as they assist agency partners working to implement the AI executive order. Particular attention should be paid to the full definitions and presumed lists of safety-impacting and rights-impacting AI in the final document, as well as to CAIO directives and implementation policies, as there could be some variation agency by agency.
SIA will continue to track developments and provide analysis related to implementation of the AI executive order, and we continue to welcome any member feedback. For further information, please contact SIA Senior Director of Government Relations Jake Parker at jparker@securityindustry.org.