A coalition of international government agencies, including members of the G7, released guidance on May 21, 2024, outlining the essential elements that AI 'ingredients list' tools should include to enhance AI security. The concept, known as a software bill of materials (SBOM) for AI—sometimes referred to as an AISBOM or AIBOM—aims to provide a comprehensive inventory of all components within an AI system.
The guidance, developed by agencies such as the Cybersecurity and Infrastructure Security Agency (CISA), sets voluntary minimum standards for AISBOMs. It builds upon previous efforts to standardize SBOMs across other types of software.
“While not exhaustive or mandatory, the supplemental minimal elements outlined in this guidance reflect the consensus of G7 experts and will expand over time to keep pace with the rapid advancement of AI technology,” stated CISA.
Key Elements of the AISBOM Guidance
The guidance specifies several categories that AISBOMs should cover:
- AISBOM Information: Details about the AISBOM document itself.
- AI System Overview: Comprehensive information about the AI system.
- Model Identification: Identification of the AI models used within the system.
- Dataset Information: Details on datasets used throughout the model’s lifecycle.
- Infrastructure Requirements: Information on both physical and virtual infrastructure needed to operate and support the AI system.
- Cybersecurity Measures: Security protocols applied to AI models and systems.
- Key Performance Indicators (KPIs): Metrics to evaluate the AI system’s performance.
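The seven categories above can be sketched as a minimal, hypothetical AISBOM document. All field names below are illustrative assumptions for this article, not part of the guidance or of any formal schema; a real AISBOM would follow an established format such as CycloneDX or SPDX:

```python
# Illustrative sketch of an AISBOM covering the guidance's seven categories.
# Every key and value here is hypothetical, chosen only to mirror the list above.

aisbom = {
    "aisbom_info": {               # AISBOM Information: details about the document itself
        "spec_version": "0.1",
        "created": "2024-05-21",
        "author": "example-vendor",
    },
    "ai_system": {                 # AI System Overview
        "name": "example-chat-assistant",
        "description": "Customer-support chatbot",
    },
    "models": [                    # Model Identification
        {"name": "example-llm", "version": "1.0", "supplier": "example-lab"},
    ],
    "datasets": [                  # Dataset Information across the model lifecycle
        {"name": "support-tickets-2023", "use": "fine-tuning"},
    ],
    "infrastructure": {            # Infrastructure Requirements
        "physical": ["gpu-cluster"],
        "virtual": ["inference-api-container"],
    },
    "security": {                  # Cybersecurity Measures
        "measures": ["model-signing", "access-control"],
    },
    "kpis": [                      # Key Performance Indicators
        {"metric": "answer-accuracy", "target": 0.95},
    ],
}

# A downstream consumer could check that all seven categories are present:
required = {"aisbom_info", "ai_system", "models", "datasets",
            "infrastructure", "security", "kpis"}
assert required <= aisbom.keys()
```

The point of the sketch is the shape, not the values: each top-level key corresponds to one category in the guidance, so a buyer can mechanically verify completeness before trusting the inventory.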
Industry Reactions to the Guidance
Three industry professionals, all with experience in AISBOM development, shared their perspectives on the guidance with CyberScoop. While they praised the initiative, they also identified areas for improvement.
“Pretty much every piece of software out there is now going to have AI incorporated into it, and when a hospital is buying an AI-enabled medical device, or the Department of Defense is buying an AI-enabled weapon system, or auto manufacturers are putting AI into cars, we need to be able to trust what AI is in those systems. And the first step to trust is to identify what is this AI, where did it come from? How is it trained?”
— Daniel Bardenstein, CEO of Manifest Cyber
Bardenstein, who has developed an AIBOM generator and collaborated with CISA and the OWASP Foundation on AISBOMs, called the guidance “a strong, applaudable step towards getting everybody on the same page that this is the future of how we need to think about trusting AI.”
“This is amazing because it covers 80 to 90% of what’s needed. There was no baseline, but it now will put out a clear baseline.”
— Dmitry Raidman, Co-founder and CTO of Cybeats
Raidman, who has also built an AIBOM generator and worked with CISA and OWASP on AISBOMs, emphasized the significance of establishing a baseline for AI transparency.
However, Bardenstein raised concerns about how the guidance would be implemented in practice, and Raidman noted that it leaves certain critical issues unaddressed.