OpenAI’s latest post on running Codex safely offers a look at how the company keeps its coding agent within guardrails as it takes on more work on behalf of users. The write-up walks through the controls OpenAI uses internally to keep Codex productive without giving it unfettered access to systems, networks, or sensitive actions.
The company describes a setup built around bounded execution, approvals, and managed policies that can slow Codex down when an action looks risky. OpenAI also points to controls around identity and credentials, along with rules meant to separate ordinary development commands from actions that appear more sensitive.
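The post doesn’t publish the policy rules themselves, but the idea of separating ordinary development commands from sensitive ones can be sketched as a simple approval gate. Everything below is illustrative: the command lists and the fail-closed behavior are assumptions, not OpenAI’s actual policy model.

```python
import shlex

# Hypothetical allow-list of ordinary development commands and a
# deny-style list of sensitive programs; the managed policies OpenAI
# describes are far richer than this sketch.
SAFE_COMMANDS = {"ls", "cat", "grep", "pytest", "git"}
SENSITIVE_COMMANDS = {"curl", "ssh", "rm", "sudo"}

def requires_approval(command: str) -> bool:
    """Return True when a command should pause for human approval."""
    tokens = shlex.split(command)
    if not tokens:
        return True  # empty or unparseable input: fail closed
    program = tokens[0]
    if program in SENSITIVE_COMMANDS:
        return True
    # Unknown commands also fail closed rather than running silently.
    return program not in SAFE_COMMANDS
```

A gate like this is what lets an agent run routine builds and tests at full speed while anything network-facing or destructive waits for a human decision.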
There is also a strong emphasis on logging and auditability. OpenAI outlines how Codex can surface agent-aware telemetry, including prompts, approval decisions, tool activity, and network events, so security teams can review not just what happened, but also the context around it.
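To make that kind of after-the-fact review possible, each event is typically captured as a structured record that a security team can filter and correlate. The sketch below shows one minimal way to emit such records as JSON lines; the event names and fields are hypothetical, not OpenAI’s actual telemetry schema.

```python
import json
import time

def audit_event(kind: str, **fields) -> str:
    """Serialize one agent-aware telemetry record as a JSON line.

    `kind` might be "prompt", "approval", "tool_call", or "network";
    these names are illustrative placeholders.
    """
    record = {"ts": time.time(), "kind": kind, **fields}
    return json.dumps(record, sort_keys=True)

# Example: the trail a reviewer could reconstruct after a session,
# covering the prompt, an approval decision, and a network event.
log = [
    audit_event("prompt", text="update the deploy script"),
    audit_event("approval", command="rm -rf build/", decision="denied"),
    audit_event("network", host="registry.npmjs.org", allowed=True),
]
```

Because every record carries a timestamp and a kind, reviewers can see not just what happened but the surrounding context, such as which approval decision preceded a given tool call.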
The full post goes deeper into the configuration model and how the company ties these pieces together across different Codex surfaces.
Source: OpenAI

