Cybersecurity agencies from the United States, Australia, Canada, New Zealand, and the United Kingdom jointly published guidance on Friday urging organizations to treat autonomous artificial intelligence systems as a core cybersecurity concern. The agencies warned that agentic AI technology is already being deployed in critical infrastructure and defense sectors with insufficient safeguards.

The guidance focuses on agentic AI—software built on large language models that can plan, make decisions, and take actions autonomously. To function, this software requires connections to external tools, databases, memory stores, and automated workflows, enabling it to execute multi-step tasks without human review at each stage.

The guidance was co-authored by:

  • The U.S. Cybersecurity and Infrastructure Security Agency (CISA)
  • The National Security Agency (NSA)
  • The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC)
  • The Canadian Centre for Cyber Security
  • New Zealand’s National Cyber Security Centre (NCSC)
  • The United Kingdom’s National Cyber Security Centre (NCSC)

The agencies emphasized that agentic AI does not require an entirely new security discipline. Instead, organizations should integrate these systems into existing cybersecurity frameworks and governance structures, applying established principles such as zero trust, defense-in-depth, and least-privilege access.

The document identifies five broad categories of risk:

  1. Privilege: Excessive agent access can amplify the impact of a single compromise far beyond typical software vulnerabilities.
  2. Design and configuration flaws: Poor setup can create security gaps before a system is even deployed.
  3. Behavioral risks: Agents may pursue goals in unintended or unpredictable ways.
  4. Structural risk: Interconnected networks of agents can trigger cascading failures across organizational systems.
  5. Accountability: Agentic systems make decisions through opaque processes and generate logs that are difficult to parse, complicating post-incident investigations.

The agencies also highlighted that failures in these systems can have tangible consequences, including altered files, modified access controls, and deleted audit trails. The guidance specifically addresses prompt injection—a persistent issue with large language models—where malicious instructions embedded in data can hijack an agent’s behavior to perform unauthorized tasks. Some companies have acknowledged that prompt injection may never be fully resolved.

Identity management receives significant attention in the document. The agencies recommend that each agent carry a verified, cryptographically secured identity, use short-lived credentials, and encrypt all communications with other agents and services. For high-impact actions, human approval should be mandatory, and the guidance clarifies that determining which actions require approval is the responsibility of system designers—not the agent itself.

The agencies acknowledged that the cybersecurity field has not yet fully adapted to the rise of agentic AI. Some risks unique to these systems remain unaddressed by existing frameworks, prompting the guidance to call for further research and collaboration as the technology assumes increasingly critical operational roles.

“Until security practices, evaluation methods and standards mature, organisations should assume that vulnerabilities will exist and plan accordingly.”
Source: CyberScoop