The Trump administration has reversed its stance on AI safety regulations, signing agreements with Google DeepMind, Microsoft, and xAI to conduct government safety checks on their frontier AI models before and after release.
Trump had previously dismissed the Biden-era policy, calling voluntary safety checks unnecessary overregulation that stifled innovation. Upon taking office, his administration rebranded the US AI Safety Institute as the Center for AI Standards and Innovation (CAISI), dropping "safety" from the name in a direct rebuke of the previous administration's approach.
However, the administration's position shifted after Anthropic announced it would not release its latest Claude Mythos model, citing the risk that its advanced cybersecurity capabilities could be exploited by bad actors.
According to Kevin Hassett, Director of the White House National Economic Council, Trump may soon issue an executive order mandating government testing of advanced AI systems before release. Fortune first reported the potential policy change.