OpenAI’s Codex is being described as “a lot more powerful” in a widely reposted update that points to four concrete areas of expansion: computer use, an in-app browser, image generation and editing, and 90+ new plugins for connecting Codex to other tools and services.
What’s actually in the update
The post itself is short, but it’s specific about the surface area that’s growing:
- Computer use: framed as a new capability category, implying Codex can operate a machine directly rather than working in a narrow prompt-in/prompt-out loop.
- In-app browser: suggests browsing is now integrated directly into the Codex experience rather than handled by an external tool.
- Image generation and editing: adds multimodal creation and modification alongside the more familiar coding-oriented use cases.
- 90+ new plugins: a notable jump in integrations, with the stated aim of connecting Codex to a broad range of external tools and services.
Why these additions matter for AI-assisted coding workflows
Taken together, the additions read like an attempt to turn Codex into a hub for end-to-end tasks, letting a single session move between browsing, tool connections, and content creation, rather than a standalone coding model that only emits code.
Original source: https://x.com/thsottiaux/status/2044882457955377186