OpenAI has unveiled Daybreak, a cybersecurity initiative that merges the company’s large language models with its Codex agentic framework to assist organizations in identifying, patching, and validating software vulnerabilities across the development lifecycle.

The platform is structured around three model tiers:

  • GPT-5.5 for general-purpose use,
  • GPT-5.5 with Trusted Access for Cyber for verified defensive security workflows, and
  • GPT-5.5-Cyber, a more permissive variant designed for specialized use cases such as authorized red-teaming and penetration testing.

Each tier includes varying safeguard levels and access controls, with the most capable tier incorporating stronger identity verification and account-level oversight.

"For cyber defense, it means seeing risk earlier, acting sooner, and helping make software resilient by design," a company blog post reads.

OpenAI did not respond to CyberScoop’s request for further comment.

Daybreak’s Arrival Amid Rising AI Cybersecurity Initiatives

Daybreak debuts weeks after Anthropic introduced Project Glasswing, a cybersecurity-focused AI system built around Claude Mythos Preview. Anthropic describes Mythos as capable of autonomously identifying software vulnerabilities at scale. However, access to Mythos remains tightly restricted due to safety and national security concerns, and the model is not commercially available.

A Tiered Approach to AI Access and Risk Mitigation

The structure of Daybreak reflects a deliberate effort to balance broad access against the risks posed by these models. The standard GPT-5.5 model is available for general enterprise and developer use. GPT-5.5 with Trusted Access for Cyber targets security professionals engaged in defensive workflows, including vulnerability triage, malware analysis, detection engineering, and patch validation. GPT-5.5-Cyber, the highest-capability tier, is currently in preview and reserved for specialized workflows, such as authorized red-teaming, under controlled conditions.

OpenAI has framed the access controls as a response to the dual-use nature of the underlying technology. The same AI capabilities that enable defenders to understand codebase relationships, identify subtle vulnerabilities, and accelerate remediation could potentially be misused, the company acknowledged.

The platform pairs expanded capability with what OpenAI describes as a framework of trust, verification, proportional safeguards, and accountability.

"We don’t think it’s practical or appropriate to centrally decide who gets to defend themselves," the company said in a prior blog post related to the Trusted Access for Cyber program. "Instead, we aim to enable as many legitimate defenders as possible, with access grounded in verification, trust signals, and accountability."

Industry and Government Adoption

Several major technology and cybersecurity companies are already participating in the Trusted Access for Cyber framework, including Cisco, Oracle, CrowdStrike, Palo Alto Networks, Cloudflare, Fortinet, Akamai, and Zscaler.

Anthony Grieco, Cisco’s chief security and trust officer, called the technology a "force multiplier for defenders," noting that models like GPT-5.5 are transforming the pace of security operations, from incident investigation to proactive exposure reduction. He emphasized that the value lies not in the model alone but in the enterprise framework built around it.

At the federal level, the Trump administration is evaluating how Anthropic’s Mythos could be used to protect government networks. Federal CIO Greg Barbaccia told CyberScoop last month that he sees potential in strengthening federal cyber defenses but acknowledged significant uncertainties about its implementation.

Source: CyberScoop