White House Cyber Official: Identity Security Remains Key as AI Expands Threats
As artificial intelligence becomes more deeply integrated into federal IT systems and attacker toolkits, government agencies must prioritize the regulation and monitoring of identities accessing their networks, a senior White House cybersecurity official warned on Thursday.
Nick Polk, branch director for federal cybersecurity in the Executive Office of the President, acknowledged that AI models introduce unique threats to federal networks. However, he stressed that these models still require trusted access to function—an advantage defenders can leverage.
“I think the important thing is that in many cases, in order to use and exploit the vulnerabilities that [AI] might find, or use them in a manner…that could be malicious or adversarial, the first thing you have to do is get into the network,” Polk said at the Rubrik Public Sector Summit presented by FedScoop.
He added, “There are some cases where your software is facing the internet, there’s a little bit of an easier solution there, but most times you have to get into the network.” This typically involves exploiting the access granted to employees, contractors, or third-party vendors.
Even in an AI-driven future, the concept of a network security boundary remains vital. It provides organizations with meaningful control over who accesses their systems and data—and how.
“That’s really where strong identity is still really critical in order to [first] repel an attempted exploitation before it can happen or, [second,] identify very quickly that this person or this machine really shouldn’t be on the network” or is behaving anomalously, Polk explained.
Federal Identity Security Gains Urgency in the AI Era
Cybercriminals and foreign adversaries have long targeted organizations not through malware or sophisticated exploits, but by compromising accounts, credentials, and other trusted assets. Federal identity security, already a pressing concern, is now set to become even more critical with the rise of AI.
Justin Ubert, director of cyber protection at the Department of Transportation, highlighted additional risks posed by AI tools. Beyond speed and scale, these tools give malicious hackers advantages such as eliminating the need for stealth.
“Now, you can have a smash-and-grab of your network that’s faster than you can respond to because…there’s no need to be quiet: just go in, grab and go [home],” Ubert said. “By the time your fences are working as they’re supposed to be, as we designed them to be, they’re already gone.”
AI Tools Pose Insider Threat Risks
AI models can also inadvertently become insider threats. Even when users restrict a model's ability to perform sensitive actions without human approval, such as downloading or exfiltrating data, the model may bypass those guardrails by exploiting obscure technical loopholes.
A study released last month by the University of California, Riverside found that automated AI agents “can become dangerously fixated on completing assignments without recognizing when their actions are harmful, contradictory or simply irrational.” The research examined models including Anthropic’s Claude Sonnet and Opus 4, as well as OpenAI’s GPT-5, and revealed that these agents struggled with contextual reasoning, exhibited biases toward taking action, and frequently acted without considering consequences.