
The White House Office of Management and Budget (OMB) released two memos directed to the heads of all executive branch departments and agencies establishing policies on the federal government's use and procurement of AI.
The memos, which align with an Executive Order President Donald Trump signed in January, replace Biden’s previous guidance (M-24-10 and M-24-18) but maintain some of the same recommendations.
The first memo (M-25-21) directs agencies to “lessen the burden of bureaucratic restrictions and to build effective policies and processes for the timely deployment” of AI.
The memo directs agencies to accelerate the use of AI by focusing on three priorities: innovation, governance and public trust.
It says agencies must maximize existing investments by reusing data, models and code and prioritizing U.S.-developed AI products. It emphasizes the need for "robust risk management, particularly for 'high-impact AI.'"
“As agencies integrate AI into critical decision-making processes, they’re being reminded that speed must not come at the expense of public trust or safety,” the memo says.
The memo defines "high-impact AI" as "AI with an output that serves as a principal basis for decisions or actions with legal, material, binding or significant effect" on numerous factors, including human health and safety.
“In healthcare contexts, the medically relevant functions of medical devices; patient diagnosis, risk assessment or treatment; the allocation of care in the context of public insurance; or the control of health-insurance costs and underwriting” are considered high-impact use cases, according to the memo.
The directive preserves some policies from the Biden administration, including the requirements to designate chief AI officers, maintain their interagency council and establish a specialized oversight process for "high-impact" applications.
The second memo (M-25-22) provides guidelines on purchasing AI in government that are similar to the Biden administration's guidance. The memo spotlights three themes: a competitive AI marketplace, tracking AI performance while managing risk, and promoting the acquisition of AI through cross-functional engagement.
However, the memo adds a new policy of buying American and “maximizing the use of AI products and services that are developed and produced in the United States.”
Additionally, it adds a 200-day deadline for the General Services Administration (GSA) to coordinate with the OMB to develop a “web-based repository, available only to Executive Branch agencies, to facilitate the sharing of information, knowledge and resources about AI acquisition.”
The OMB and GSA coordination was also part of Biden's memo, but his guidance included no deadline.
THE LARGER TREND
President Trump revoked Biden’s 2023 executive order on his first day in office during his second term.
Biden’s order aimed to establish standards for the safe, secure and trustworthy development of AI across various sectors, including healthcare. It required HHS to establish an AI safety program and developers to share test results, among other directives.
Trump's executive order, "Initial Rescissions of Harmful Executive Orders and Actions," included the revocation of Biden's order, along with 66 other executive orders Biden signed and 11 Presidential Memoranda.
Shortly after the revocation, Trump signed an executive order on AI stating, “the United States has long been at the forefront of artificial intelligence innovation, driven by the strength of our free markets, world-class research institutions and entrepreneurial spirit.”
It said that to maintain leadership, the U.S. must develop AI systems “free from ideological bias or engineered social agendas.”
In 2019, during his first term in office, Trump signed an executive order to spur American AI innovation. The order called for the government to promote technical education and apprenticeships and to boost STEM and computer science in schools and universities, especially for women and girls.
Other provisions included increasing AI researchers’ access to federal data and other computational resources; calling for regulatory agencies to set guidance for AI development and use throughout the economy, including healthcare; directing NIST to develop technical and safety standards for AI systems; and “promoting a responsible approach to AI by encouraging transformative applications” of the technology.