OpenAI is expanding its Trusted Access for Cyber (TAC) work as it prepares for “increasingly more capable models” in the months ahead—starting with a new cybersecurity-tuned variant of GPT‑5.4. The update scales the program to thousands of verified individual defenders and hundreds of teams focused on protecting critical software, while also introducing GPT‑5.4‑Cyber, a model trained to be more permissive on cyber tasks for defensive use cases.
A “trusted access” approach built around verification and guardrails
The thrust of TAC is straightforward: keep broad access to general models, but use identity verification and trust signals to unlock more sensitive cyber capabilities without relying on ad hoc, manual approvals. OpenAI frames the overall strategy around three principles:
- Democratized access backed by strong KYC and identity verification, intended to reduce arbitrary decision-making about legitimate use.
- Iterative deployment, with safeguards updated as OpenAI learns more from real-world usage and adversarial pressure (including jailbreak attempts).
- Ecosystem resilience, via grants, open-source security support, and tools designed to accelerate vulnerability discovery and remediation.
This sits alongside earlier initiatives OpenAI cites, including its Cybersecurity Grant Program, the Preparedness Framework, and Codex Security, which it says has contributed to over 3,000 fixed critical- and high-severity vulnerabilities since its recent launch.
GPT‑5.4‑Cyber: fewer capability restrictions for vetted defenders
The headline model change is GPT‑5.4‑Cyber, described as a version of GPT‑5.4 that lowers the refusal boundary for legitimate cybersecurity workflows and enables advanced defensive tasks. OpenAI specifically calls out binary reverse engineering as a newly enabled capability—useful for analyzing compiled software for malware potential, vulnerabilities, and security robustness when source code isn’t available.
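To make the binary reverse-engineering use case concrete: a common first triage step when source code isn’t available is extracting printable strings from a compiled sample (the classic `strings` pass), which surfaces embedded URLs, API names, and other indicators a defender might then ask a model to assess. The sketch below is illustrative only and uses just the Python standard library; the sample bytes are fabricated for the example.

```python
import re

def extract_strings(data: bytes, min_len: int = 4) -> list[str]:
    """Pull runs of printable ASCII out of raw binary data, like the
    classic `strings` tool -- a first triage step before deeper
    reverse engineering of a compiled sample."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fabricated "binary": opaque bytes with two embedded indicators, as
# might appear in a compiled sample under analysis.
sample = (
    b"\x7fELF\x02\x01\x00"
    + b"http://203.0.113.9/payload"   # suspicious embedded URL
    + b"\xff\xfe"
    + b"CreateRemoteThread"           # API name often seen in injectors
    + b"\x00"
)
print(extract_strings(sample))
# → ['http://203.0.113.9/payload', 'CreateRemoteThread']
```

Output like this, rather than the raw binary, is the kind of distilled context a vetted analyst could hand to a cyber-permissive model for malware-potential or robustness assessment.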
Because this model is intentionally more permissive, OpenAI says access will roll out in a limited, iterative deployment to vetted security vendors, organizations, and researchers. The post also notes that access to “permissive and cyber-capable models” may include restrictions, particularly for no-visibility uses such as Zero-Data Retention, and especially when models are accessed via third-party platforms where OpenAI has less visibility into context and intent.
How access works: individuals and enterprises
OpenAI outlines two primary on-ramps into TAC:
- Individual defenders can verify identity at chatgpt.com/cyber.
- Enterprises can request trusted access through an OpenAI representative.
Existing TAC customers can also express interest in additional access tiers via a Google Form.
What OpenAI says comes next
OpenAI argues that current safeguards are sufficient for broad deployment of today’s models, but draws a clear line between general availability and cyber-permissive variants, which it says require more restrictive deployments and appropriate controls. Over the longer term, the company expects “more expansive defenses” will be needed as future models exceed today’s purpose-built cybersecurity systems.
Original source: https://openai.com/index/scaling-trusted-access-for-cyber-defense/
