After reading Anthropic's internal safety assessment report on the Claude Mythos Preview, as a veteran who has been in the cybersecurity trenches for many years, I've been doing a lot of thinking, and I'd like to share some thoughts from a practical and industry perspective.
This is absolutely not on the scale of the old "AI helps you write a phishing script" or "AI helps you find a code vulnerability." The report states plainly: Mythos can, with zero human intervention, autonomously discover 0-day vulnerabilities that have lain buried in operating systems for 20+ years, and automatically write expert-level, extremely complex exploit chains (such as JIT heap spraying and KASLR bypasses to obtain the highest system privileges).
Put simply: what used to require an entire nation-state hacking team (APT) pulling all-nighters and going bald over — nuclear-weapon-grade cyberattack tools — can now be mass-produced automatically overnight by an American AI model at an API cost of about $50.
Since the ones holding this "fully automatic machine gun" are America's top AI companies, and this AI company harbors hostility toward China (their Glass Program includes not a single Chinese security vendor), we must abandon all illusions. This thing represents a dimensionality-reduction strike against China's national cybersecurity and our domestic cybersecurity industry. So how do we respond?
I. The Sword of Damocles Hanging Over National Security
When people read the news, they might feel that the cyber warfare between great powers is distant from their lives. But in reality, real cyber competition comes down to who holds the most unknown vulnerabilities (0-days).
1. The offensive-defensive resource gap has been instantly stretched into a "generational gap"
In the past, whether it was the American NSA or other nations' cyber forces, stockpiling 0-day vulnerabilities required enormous human capital. Top-tier security researchers are scarce globally — the competition was a battle of minds.
But now, what Mythos demonstrates is the capability for "fully automated machine discovery + fully automated weaponization." This means the adversary now commands an elite hacker army that requires no salaries, no sleep, and whose offensive scale is limited only by available computing power. This is a dimensionality-reduction strike of computational power over human power — like we're still sharpening swords in the age of cold steel while the other side has just rolled out a fully automatic Gatling gun.
2. Our "Xinchuang" infrastructure could face carpet-bombing scans
In recent years we've been pushing "Xinchuang" (domestic technology substitution) — domestic operating systems, databases, and government clouds. But if we're being honest, even genuinely domestic software inevitably draws heavily from and embeds global open-source code (Linux kernel, various network protocol stacks, cryptographic libraries, etc.).
What did Mythos target in the report? FreeBSD, OpenBSD, FFmpeg — all core, foundational open-source libraries. If the adversary deploys powerful AI to "carpet-clean" these same open-source codebases we also use, digging up hundreds of 0-days to stockpile without disclosing them — then should an extreme situation arise, our critical information infrastructure (power grids, finance, transportation) could be as fragile as paper in their hands, collapsing at a single touch.
II. An Earthquake for China's Domestic Cybersecurity Industry: The Era of Selling "Signature Databases" Is Finished
Let's talk about China's domestic cybersecurity industry. Despite rapid growth in recent years, there's an open secret in the field: the vast majority of security protection is fundamentally driven by "compliance" — everyone is selling blacklist-based boxes.
What does "blacklist-based" mean? Simply put, traditional firewalls, web application firewalls (WAFs), intrusion detection systems (IDS), and antivirus software are like a security guard holding a "wanted criminal registry." Whenever an attack has occurred before, security vendors record the signatures (hash values) or attack strings of that malicious code and add them to the registry. The next time the same attack comes along, the guard checks it against the registry ("hey, you're on the list") and blocks it.
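The blacklist logic described above really does boil down to a few lines. Here is a minimal Python sketch of signature matching; the payloads and the hash set are purely illustrative, not taken from any real product:

```python
import hashlib

# A toy "wanted criminal registry": SHA-256 hashes of payloads seen before.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"<script>alert('pwned')</script>").hexdigest(),
    hashlib.sha256(b"' OR '1'='1").hexdigest(),
}

def is_blacklisted(payload: bytes) -> bool:
    """Signature check: block only if this exact payload was seen before."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# An exact replay of a known attack is caught...
print(is_blacklisted(b"' OR '1'='1"))   # True
# ...but any byte-level variation sails straight through.
print(is_blacklisted(b"' OR '2'='2"))   # False
```

The weakness is visible at a glance: the guard recognizes only exact, previously recorded faces, which is precisely what the next section exploits.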
But against a Mythos-level AI, these products will be reduced to scrap metal. Why?
1. Attacks become "a thousand faces for a thousand people" — signature databases are completely blind
Mythos doesn't just discover 0-days no one has seen before (which simply aren't in the registry). Even more terrifying: even for known vulnerabilities, the attack code (exploit) the AI generates looks different every single time. It can automatically obfuscate code and automatically generate multi-stage, complex attack payloads. It's like a wanted criminal who not only gets plastic surgery, but also changes their fingerprints and DNA on demand. Your old blacklist rules that rely on matching fixed signatures will see their interception rates drop to absolute zero against this kind of dynamic, polymorphic attack.
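To see why fixed-signature matching collapses against polymorphism, consider a toy wrapper. A real AI attacker's obfuscation is far more sophisticated, but even this one-byte XOR re-encoding (all names and payloads here are invented for illustration) yields a different on-the-wire hash for every send while the decoded behavior stays identical:

```python
import hashlib

def polymorphic_wrap(payload: bytes, key: int) -> bytes:
    """Re-encode the same payload under a different XOR key each send.
    The first byte carries the key; a decoder stub would unwrap it at run time."""
    body = bytes(b ^ key for b in payload)
    return bytes([key]) + body

def unwrap(blob: bytes) -> bytes:
    """What the victim machine ultimately executes: always the same payload."""
    key, body = blob[0], blob[1:]
    return bytes(b ^ key for b in body)

original = b"EXPLOIT-STAGE-1"
v1 = polymorphic_wrap(original, 0x5A)
v2 = polymorphic_wrap(original, 0xC3)

# Identical behavior after decoding, but no shared signature on the wire:
print(unwrap(v1) == original and unwrap(v2) == original)                  # True
print(hashlib.sha256(v1).hexdigest() == hashlib.sha256(v2).hexdigest())  # False
```

The signature engine sees two unrelated blobs; the hash-set check from the previous sketch matches neither. Scale the key space up and add structural rewriting, and the "wanted registry" approach has nothing left to match on.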
2. The golden patching window has been completely eliminated
In the past, even when a high-severity vulnerability was publicly disclosed (an N-day) and a vendor released a patch, enterprise IT operators typically felt "no rush — we'll patch over the weekend, or do a unified rollout next month." That's because turning a publicly disclosed vulnerability into an automated tool capable of actually penetrating an internal network typically took hackers several weeks.
But the report states clearly: given a CVE number and patch code, Mythos can fully automatically produce a privilege-escalation-to-root exploit in less than a day. This means that in the future, the moment a vulnerability is disclosed, automated attacks could be hammering your door within hours. The habit of Chinese enterprise and government clients patching "once a month" is essentially handing free kills to AI hackers.
III. Ordinary Security Products Can't Defend Against AI Hackers — So What Do We Do?
Are we simply waiting to die? The traditional "guard the gate" mindset must be completely abandoned. We need to fight magic with magic, and focus on developing the following categories of disruptive defensive technologies:
Core Solution #1: Aggressively develop anomaly behavior detection and deception technology
Since AI hackers can silently bypass the front door with a 0-day (a master key), we shouldn't just stand guard at the door — we need to fill the rooms with traps. This is the essence of honeypots and deception defense.
How can an ordinary person understand honeypots? Imagine you're wealthy and want to prevent theft. You don't just install a security door (traditional firewall) — you also place an extremely convincing "fake safe" (a honeypot) in the living room, filled with fake gold (fabricated core data), wired to a silent alarm. A thief (AI hacker) uses the master key to open the door — the traditional guard never notices. But once inside, the thief will inevitably rummage around looking for high-value targets. The moment they touch that fake safe — click — the alarm goes off, and you've caught them in the trap.
Why is deception defense so effective against AI hackers? No matter how intelligent the AI model, its attacks are based on logic and exploration. After entering the internal network, it will inevitably scan ports, read credential files, and attempt lateral movement. We scatter large numbers of fake network segments and fake credentials throughout the internal network (for example, deliberately leaving what appears to be a high-privilege database account and password). The AI cannot distinguish real from fake at the code level. The moment it greedily uses one of those fake credentials, we instantly capture its trail. Deception defense doesn't care what impressive vulnerability you used to get in — it delivers a dimensionality-reduction strike directly against your attack behavior. This is the most effective and most cost-efficient approach to dealing with unknown 0-days in the future.
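The honey-credential trip-wire described above can be sketched in a few lines of Python. The account names, passwords, and alert format below are invented for illustration; a real deployment would plant decoys across config files, credential stores, and network shares:

```python
import datetime

# Decoy credentials deliberately "leaked" around the internal network.
# No legitimate system knows them, so any use of them is by definition malicious.
HONEY_CREDENTIALS = {
    ("db-backup-admin", "Pr0d#2019!"),   # fake high-privilege database account
    ("svc_deploy", "deploy@Root88"),     # fake deployment service account
}

ALERTS = []

def check_login(username: str, password: str, source_ip: str) -> bool:
    """Hook in the auth path. A honey-credential hit fires a silent alarm."""
    if (username, password) in HONEY_CREDENTIALS:
        ALERTS.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "user": username,
            "src": source_ip,
            "kind": "honey-credential-used",
        })
        return False   # deny the login, but above all: record the intruder's trail
    return False       # real authentication would happen here

check_login("db-backup-admin", "Pr0d#2019!", "10.0.3.77")
print(len(ALERTS))   # 1: the attacker exposed itself regardless of its entry 0-day
```

Note that the trap never inspects the attack payload at all; it triggers purely on behavior, which is exactly why it works against unknown 0-days.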
Core Solution #2: AI-based behavioral monitoring and detection & response
We've established that signature-based "blacklists" are dead. So how do we monitor? The answer: build dynamic "whitelists" and AI-based user behavioral baseline analysis.
A plain-language example: Company accountant Old Wang normally logs into the system every morning at 9 a.m., checks some reports, prints a few documents — a very consistent traffic pattern. Then one night at 2 a.m., Old Wang's account not only logs in, but starts frantically reading the underlying R&D code repository and attempting to send hundreds of gigabytes of data to an unfamiliar overseas IP address.
At this point, even though no security device has detected any known "hacker attack code," Old Wang's behavior is already extremely abnormal. The new generation of security products actually runs large language models internally. Their job is not to find hacker code, but to use AI to learn normal behavioral baselines across the entire network. The moment even a slight deviation from the baseline appears — even if you entered via the universe's most elite 0-day, even if you're impersonating a legitimate user — as long as your goal is to steal data or cause damage, your behavioral footprint will inevitably trigger the AI monitoring alert. This is "catching someone in the act, not checking passwords."
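The Old Wang example can be sketched as a baseline check in Python. Real products learn the baseline statistically from months of telemetry rather than hard-coding it, and every threshold below is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    hour: int        # local hour of the access, 0-23
    bytes_out: int   # data sent out of the network in this session

# Baseline "learned" from history (hard-coded here for the sketch):
# the accountant works 9:00-18:00 and never uploads more than ~50 MB.
BASELINE = {"lao_wang": {"hours": range(9, 19), "max_bytes_out": 50_000_000}}

def is_anomalous(ev: Event) -> bool:
    """Flag deviation from the learned baseline, with no attack signature needed."""
    base = BASELINE.get(ev.user)
    if base is None:
        return True                       # unknown account: always suspicious
    if ev.hour not in base["hours"]:
        return True                       # activity outside normal working hours
    if ev.bytes_out > base["max_bytes_out"]:
        return True                       # abnormal outbound data volume
    return False

print(is_anomalous(Event("lao_wang", 9, 2_000_000)))         # False: normal morning
print(is_anomalous(Event("lao_wang", 2, 300_000_000_000)))   # True: 2 a.m., 300 GB out
```

The detector has no idea which 0-day opened the door; it only knows that accountants do not exfiltrate the R&D repository at 2 a.m.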
Core Solution #3: We must train China's own specialized cybersecurity large model
National defense cannot be outsourced; security cannot be borrowed. Anthropic has built Mythos to perform pre-emptive scanning of America's critical infrastructure. China must have its own dedicated cybersecurity large model, with capabilities that match or surpass it.
Offense is the best defense: Our national teams and leading cybersecurity enterprises must use our own AI to conduct a rigorous round of "AI-automated red-blue team adversarial exercises" before products go live and before code enters the Xinchuang procurement library. We must discover and patch our own 0-days first — we absolutely cannot leave our vulnerabilities for adversaries to find.
Automated security response: When facing hundreds or thousands of automated attack alerts, human operators simply cannot keep up with reviewing logs. We need AI defenders to go up against AI hackers: the moment an anomaly or attack is detected, the defensive large model must complete attribution, IP blocking, and infected-host isolation within seconds, achieving genuinely machine-speed, machine-versus-machine confrontation.
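A minimal sketch of such an automated detect-and-respond loop follows. The `block_ip` and `isolate_host` functions here are stand-ins of my own invention for real firewall and EDR API calls, and the severity threshold is illustrative:

```python
# Automated response: when the detector raises a high-severity alert, the
# responder blocks the source IP and isolates the host with no human in the loop.
BLOCKED_IPS: set[str] = set()
ISOLATED_HOSTS: set[str] = set()
AUDIT_LOG: list[str] = []

def block_ip(ip: str) -> None:
    BLOCKED_IPS.add(ip)              # in reality: push a deny rule to the firewall
    AUDIT_LOG.append(f"blocked {ip}")

def isolate_host(host: str) -> None:
    ISOLATED_HOSTS.add(host)         # in reality: quarantine via the EDR agent
    AUDIT_LOG.append(f"isolated {host}")

def respond(alert: dict) -> None:
    """Machine-speed response: no ticket queue, no overnight wait."""
    if alert["severity"] >= 8:
        block_ip(alert["src_ip"])
        isolate_host(alert["host"])

respond({"severity": 9, "src_ip": "203.0.113.45", "host": "fin-db-03"})
print(sorted(AUDIT_LOG))   # every automated action stays auditable for humans
```

Keeping an audit log of every automated action matters: machine-speed response without after-the-fact human review is how a false positive takes down your own production systems.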
The Claude Mythos Preview report proves that cybersecurity has left the era in which a few lines of rules and a stack of firewalls were enough to let you sleep soundly.
Faced with fully automated weapons, China's enterprise and government clients and regulatory bodies need to wake up — those stacked legacy security appliances bought to pass inspections, those rigid compliance metrics, won't hold for even one second against real AI hackers.
The times have changed. Comprehensively pivoting to deception defense, AI-powered dynamic behavioral monitoring, and zero-trust architecture, while accelerating genuine emergency response capability (driving down MTTR, the mean time to respond) at both the national and enterprise levels, is the only way we can survive in this invisible, smoke-free digital security arms race.