OpenAI warns its future models will have a higher risk of aiding bioweapons development
The company is boosting its safety testing as it anticipates some models will reach its highest risk tier.

- OpenAI says its next generation of AI models could significantly increase the risk of biological weapon development, even enabling individuals with no scientific background to create dangerous agents.
OpenAI is warning that its next generation of advanced AI models could pose a significantly higher risk of biological weapon development, especially when used by individuals with little to no scientific expertise.
OpenAI executives told Axios they anticipate upcoming models will soon trigger the high-risk classification under the company’s preparedness framework, a system designed to evaluate and mitigate the risks posed by increasingly powerful AI models.
OpenAI’s head of safety systems, Johannes Heidecke, told the outlet that the company is “expecting some of the successors of our o3 (reasoning model) to hit that level.”
In a blog post, the company said it was increasing its safety testing to mitigate the risk that its models could help users create biological weapons. OpenAI is concerned that without these mitigations, models will soon be capable of “novice uplift,” allowing people with limited scientific knowledge to create dangerous weapons.
“We’re not yet in the world where there’s like novel, completely unknown creation of bio threats that have not existed before,” Heidecke said. “We are more worried about replicating things that experts already are very familiar with.”
Part of what makes mitigation difficult is that the same capabilities that could unlock life-saving medical breakthroughs could also be used by bad actors for dangerous ends. According to Heidecke, this is why leading AI labs need highly accurate testing systems in place.
“This is not something where like 99% or even one in 100,000 performance is … sufficient,” he said. “We basically need, like, near perfection.”
Representatives for OpenAI did not immediately respond to a request for comment from Fortune, made outside normal working hours.
Model misuse
OpenAI is not the only company concerned about the misuse of its models in weapons development. As models grow more capable, their potential for misuse, and the risk they pose, generally increases.
Anthropic recently launched its most advanced model, Claude Opus 4, with stricter safety protocols than any of its previous models, categorizing it as AI Safety Level 3 (ASL-3) under the company’s Responsible Scaling Policy. Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under that framework, which is loosely modeled on the U.S. government’s biosafety level (BSL) system.
Models categorized at this third safety level meet more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D. Anthropic’s most advanced model also made headlines after it opted to blackmail an engineer to avoid being shut down in a highly controlled test.
Early versions of Anthropic’s Claude Opus 4 were found to comply with dangerous instructions when prompted, for example helping to plan terrorist attacks. However, the company said this issue was largely mitigated after it restored a dataset that had been accidentally omitted during training.
This story was originally featured on Fortune.com