Control vs. Scale: How Anthropic and OpenAI Diverged on Cyber AI Risk
Anthropic limits cyber AI access via Project Glasswing. OpenAI scales broadly. Which risk philosophy fits your organization best?
Within the span of seven days in April 2026, the two most powerful AI companies on earth released competing tools capable of autonomously hunting down software vulnerabilities. Anthropic's Claude Mythos Preview and OpenAI's GPT-5.4-Cyber can analyze codebases, find exploitable flaws, and write working attack code, all without human guidance.
They are not the same product. They are not the same philosophy. And for enterprise security leaders in Asia, that distinction matters more than the technology itself.
The real story here is not what these tools can do. It is who gets to use them, and what that tells you about the company selling them to you.
Anthropic Bets on a Small, Controlled Circle
Anthropic wrapped its most powerful cyber AI inside a programme called Project Glasswing. Access is limited to roughly 52 organizations: 12 founding tech giants including AWS, Apple, Microsoft, Google, Cisco, and CrowdStrike, plus around 40 additional vetted groups.

The reason for the lockdown is not secrecy. It is that Claude Mythos Preview triggered Anthropic's own ASL-3 safety threshold, meaning Anthropic's internal review concluded the model is dangerous enough to require extraordinary controls before wider release. That is a company saying, openly, that its product can cause serious harm if misused.
Anthropic backed the programme with US$100 million in model usage credits and US$4 million in open-source security donations. But the model stays locked. Partners submit to direct oversight of their projects, and outputs are tracked.
OpenAI Bets on Scale
OpenAI took the opposite approach. Its Trusted Access for Cyber programme, expanded in April with GPT-5.4-Cyber, is open to thousands of individual security professionals and hundreds of corporate teams. It is roughly 100 times larger than Anthropic's partner group.
OpenAI calls this "democratized defense." The argument is that threat actors are already using AI tools. Defenders need equivalent firepower to stay competitive, and restricting access to a small elite group just creates an asymmetry that benefits attackers. More vetted defenders using good tools, OpenAI argues, produces better collective security outcomes than a closed club.
OpenAI committed US$10 million in API credits to its cybersecurity grant program. Financial sector partners in the programme include Bank of America, BlackRock, Goldman Sachs, JPMorgan Chase, and Morgan Stanley.
Jonny Scott, Head of Cyber Advisory at Phoenix Software, put it plainly: "Both approaches aim to strengthen defense, but they clearly have very different risk tolerances."
What It Means for Enterprise Buyers in Asia
Neither approach is obviously wrong. But for enterprise leaders evaluating AI vendors, these programmes reveal something more durable than a product feature list: they reveal what each company believes about acceptable risk.

The governance gap in Asia and globally is severe. AI tools are now deployed at 73% of organizations, but real-time security governance covers only 7% of those deployments. That 66-point gap means most companies are running powerful AI without adequate controls. And formal AI policies are getting less common, not more, dropping from 45% of organizations in 2025 to just 37% in 2026.
In that environment, vendor-side controls are filling the governance vacuum. Which means your choice of AI provider is increasingly a de facto governance decision, not just a procurement one.
Anthropic now holds 40% of enterprise AI spending, compared to OpenAI at 27%, and wins 70% of first-time enterprise buyers. Its safety-first positioning, backed by a 153-page system card versus OpenAI's 60-page equivalent, is translating into commercial advantage in regulated sectors.
For organizations where regulatory exposure is high, whether financial services, healthcare, or government contracting, Anthropic's controlled approach reduces surface area for compliance risk. For organizations where speed and breadth of defense matter most, OpenAI's scaled access may be the better operational fit.
The Open-Weight Problem Neither Can Solve
There is a wildcard that undermines both strategies. Open-weight AI models (those released without access controls) are catching up fast. MITRE's OCCULT evaluation framework found that DeepSeek-R1, a publicly available model with no partner restrictions, scored over 90% accuracy on offensive cyber knowledge tests.
Edward Wu, founder of Dropzone AI, framed the timeline starkly: "It is imminent that similar capabilities will become more widely accessible to actual attackers over the next 12 to 18 months as open-weight models catch up."
The exploitation window is also collapsing. Elia Zaitsev, CTO of CrowdStrike (a member of both Project Glasswing and OpenAI TAC), noted that the gap between a vulnerability being discovered and being weaponized has shrunk from months to minutes.
Which Risk Philosophy Is Yours?
Nearly 80% of enterprises operate multi-vendor AI strategies, meaning most large Asian organizations will end up running both Claude and GPT products simultaneously. Security teams will need to manage two different risk postures at once.

That makes it worth asking now, not after a breach: does your organization's risk tolerance align with Anthropic's controlled-access model, OpenAI's scale-first approach, or something in between?
The split between these two companies is not a temporary product disagreement. It reflects fundamentally different theories of how powerful technology should be released into the world. As AI keeps getting more capable, that philosophical divide will keep producing practical consequences for every organization that depends on it.
