White House considers executive order requiring government review of AI models before public release
Transformative AI

The Trump administration is considering an executive order that would mandate government review of advanced AI models before public release, according to Tom's Hardware and The Hill. The proposal would establish a working group of technology executives and government officials to develop oversight procedures, with the NSA, the White House Office of the National Cyber Director, and the Director of National Intelligence potentially overseeing model reviews.
The discussions represent a sharp reversal for an administration that revoked Biden's AI safety executive order within hours of taking office in January 2025. Kevin Hassett, director of the National Economic Council, told Federal News Network on 7 May that the White House is "studying possibly an executive order" to ensure future AI models "go through a process so that they're released in the wild after they've been proven safe, just like an FDA drug." A White House official subsequently characterised discussion of a potential executive order as "speculation," though the administration confirmed it is balancing innovation with security in AI policymaking.
The shift appears driven by concerns over Anthropic's Mythos model, which the company says can identify thousands of critical software vulnerabilities and which it has declined to release publicly. The Washington Post reported that the arrival of Mythos "has begun to crack the White House's hard-line stance" on promoting AI technology. The model's capabilities have prompted the administration to brief leaders from Anthropic, Google, and OpenAI on the review plans, according to officials cited by the New York Times. The proposed approach resembles that of the UK's AI Security Institute, which evaluates frontier models against safety benchmarks before deployment, though Tom's Hardware notes the US currently has no legal authority to require such reviews.
In parallel with the executive order discussions, the Commerce Department's Center for AI Standards and Innovation announced on 6 May that Google DeepMind, Microsoft, and xAI have agreed to voluntary pre-deployment evaluations of their models, joining existing agreements with OpenAI and Anthropic. Federal News Network reported that CAISI has conducted 40 evaluations to date, including on unreleased models. The timing has sparked debate within the AI policy community: a day after the White House proposal was reported, former Trump AI adviser Dean Ball and former Biden AI adviser Ben Buchanan co-authored a New York Times op-ed calling for Congress to mandate third-party audits of AI developers' safety claims. Some critics, including analysts at the Cato Institute, have warned that pre-approval systems could function as a "kill switch" on innovation and were considered heavy-handed even under the Biden administration.
Sentinel forecasters estimate a 32 per cent probability that the US Federal Government will regulate the release of all new AI models from frontier laboratories through executive order or legislation by 3 November 2026. Such a regime would represent a significant departure from the current voluntary framework, introducing pre-deployment review mechanisms analogous to those used in pharmaceuticals and other high-stakes sectors. Legal experts writing in Lawfare note that the president's authority to mandate such vetting without legislation remains uncertain: the Defense Production Act is an unlikely basis, and alternative statutes would require stretched interpretations that courts may not accept.