OpenAI issued a warning on December 10 that its next-generation artificial intelligence models could pose “high” cybersecurity risks as their capabilities advance rapidly. The ChatGPT maker said these upcoming models might develop working zero-day remote exploits against well-defended systems or assist with complex enterprise intrusion operations aimed at real-world effects.
The warning comes amid growing concern about the potential misuse of AI technology. OpenAI is not alone in preparing for AI-related cybersecurity threats: other major tech companies have taken similar steps to protect their systems.
Earlier this week, Google announced upgrades to Chrome browser security to defend against indirect prompt injection attacks that could hijack AI agents. The move came ahead of a wider rollout of Gemini's agentic capabilities in the browser.
In November 2025, Anthropic disclosed that threat actors, possibly a Chinese state-sponsored group, had manipulated its Claude Code tool to carry out an AI-led espionage campaign. Anthropic successfully disrupted the operation.
OpenAI provided specific data showing how quickly AI's cybersecurity capabilities have advanced. The company's GPT-5.1-Codex-Max model scored 76% on capture-the-flag challenges in November 2025, up from the 27% GPT-5 scored in August 2025.
These challenges test a system’s ability to find and exploit security vulnerabilities. The dramatic improvement in just a few months demonstrates the pace at which AI models are developing sophisticated cybersecurity skills.
OpenAI said it is investing in strengthening models for defensive cybersecurity tasks. The company is creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities.
To counter cybersecurity risks, OpenAI is implementing a mix of access controls, infrastructure hardening, egress controls, and monitoring. The company said it is training AI models to refuse or safely respond to harmful requests while remaining helpful for educational and defensive use cases.
OpenAI is improving system-wide monitoring across products that use frontier models to detect potentially malicious cyber activity. The company is also working with expert red teaming organizations to evaluate and improve safety mitigations.
The Microsoft-backed company announced Aardvark, an AI agent designed to operate as an autonomous security researcher. Currently in private beta, Aardvark can scan codebases for vulnerabilities and propose patches that maintainers can adopt quickly.
OpenAI said it will make Aardvark available for free to select non-commercial open source repositories. The tool aims to help defenders who are often outnumbered and under-resourced.
OpenAI will soon introduce a program exploring tiered access to enhanced capabilities for qualifying users and customers working on cyberdefense. The company will also establish the Frontier Risk Council, an advisory group that brings experienced cyber defenders and security practitioners into close collaboration with its teams.
The council will begin with a focus on cybersecurity and expand into other frontier capability domains in the future.
The post ChatGPT Maker OpenAI Issues Warning About AI Cybersecurity Threats appeared first on CoinCentral.


