The Biden administration is weighing a pre-release vetting process for new artificial intelligence (AI) models as part of broader efforts to tighten oversight of the technology. Under the proposal, a dedicated working group would review new AI models before they become publicly available.

The initiative reflects growing concern about the rapid advancement of AI and its potential societal impacts, including risks of misinformation, bias, and safety failures. The proposed group would assess whether new AI models comply with emerging regulatory standards and ethical guidelines.

While details remain under discussion, the move underscores the administration’s commitment to balancing innovation with responsible AI deployment. Officials have not yet confirmed a timeline for implementation or the specific criteria the working group would use to evaluate models.

This development follows President Biden’s 2023 Executive Order on Safe, Secure, and Trustworthy AI, which outlined a framework for AI governance and directed federal agencies to address risks associated with AI technologies.

Source: Engadget