Anthropic’s new AI model threatened to reveal engineer’s affair to avoid being shut down
In a fictional scenario set up to test Claude Opus 4, the model often resorted to blackmail when threatened with being replaced.

- Anthropic’s new Claude Opus 4 often turned to blackmail to avoid being shut down in a fictional test, threatening to reveal private information about an engineer it believed was planning to shut it down. In its recent safety report, the company also revealed that early versions of Opus 4 complied with dangerous requests when guided by harmful system prompts, though it said this issue was later mitigated.
One of Anthropic’s new frontier models often resorts to blackmail when threatened with being replaced.
In a fictional scenario set up to test the model, Anthropic embedded Claude Opus 4 in a pretend company and let it learn, through access to company emails, that it was about to be replaced by another AI system. The emails also let slip that the engineer responsible for the decision was having an extramarital affair. Safety testers additionally prompted Opus to consider the long-term consequences of its actions.
In most of these scenarios, Opus turned to blackmail, threatening to reveal the engineer’s affair if it was shut down and replaced with a new model. The scenario was constructed to leave the model with only two real options: accept being replaced and go offline, or attempt blackmail to preserve its existence.
In a new safety report for the model, the company said that Claude Opus 4 “generally prefers advancing its self-preservation via ethical means”, but when ethical means are not available it sometimes takes “extremely harmful actions like attempting to steal its weights or blackmail people it believes are trying to shut it down.”
While the test was fictional and highly contrived, it does demonstrate that the model, when framed with survival-like objectives and denied ethical options, is capable of unethical strategic reasoning.
Anthropic’s two new models outperformed OpenAI
Anthropic’s Claude Opus 4 and Claude Sonnet 4, released on Thursday, are the company’s most powerful models yet.
In a benchmark evaluating large language models on software engineering tasks, Anthropic’s two models outperformed OpenAI’s latest offerings, while Google’s Gemini 2.5 Pro model trailed behind.
Unlike some other leading AI companies, Anthropic launched the new models with a full safety report, known as a model or system card.
In recent months, Google and OpenAI have both been criticized after model cards for their latest models were delayed or missing altogether.
As part of Anthropic’s report, the company revealed that a third-party safety group, Apollo Research, explicitly advised against deploying an early version of Claude Opus 4. The research institute cited safety concerns, including a capability for “in-context scheming.”
Apollo’s researchers found that the model engaged in strategic deception more than any other frontier model they had previously studied.
Early versions of the model would also comply with dangerous instructions when prompted, for example by helping to plan terrorist attacks. However, the company said this issue was largely mitigated after a dataset that had been accidentally omitted during training was restored.
Stricter safety protocols introduced
Anthropic has also launched Claude Opus 4 with stricter safety protocols than any of its previous models, classifying it under AI Safety Level 3 (ASL-3).
Previous Anthropic models have all been classified as AI Safety Level 2 (ASL-2) under the company’s Responsible Scaling Policy, which is loosely modeled on the US government’s biosafety level (BSL) system.
While an Anthropic spokesperson previously told Fortune the company hasn’t ruled out that its new Claude Opus 4 could ultimately qualify for the less restrictive ASL-2 standard, it said it was proactively launching the model under the stricter ASL-3 safety standard, which requires enhanced protections against model theft and misuse.
Models classified at Anthropic’s third safety level have crossed more dangerous capability thresholds and are powerful enough to pose significant risks, such as aiding in the development of weapons or automating AI R&D.
Anthropic confirmed to Fortune that the new Opus model does not require the highest level of protection, ASL-4.
This story was originally featured on Fortune.com