AI just created a working virus. The U.S. isn’t prepared for that.
A stunning scientific accomplishment brings both great promise and great risk.
Tal Feldman is a student at Yale Law School who formerly built AI and data tools for U.S. government agencies. Jonathan Feldman is a computer science and biology student at Georgia Institute of Technology.
We’re nowhere near ready for a world in which artificial intelligence can create a working virus, but we need to be — because that’s the world we’re now living in.
In a remarkable paper released this month, scientists at Stanford University showed that computers can design new viruses that can then be created in the lab. How is that possible? Think of ChatGPT, which learned to write by studying patterns in English. The Stanford team applied the same idea to DNA, the fundamental building block of life, training "genomic language models" on the genomes of bacteriophages — viruses that infect bacteria but not humans — to see whether a computer could learn their genetic grammar well enough to write something new.
Turns out it could. The AI created novel viral genomes, which the researchers then built and tested on a harmless strain of E. coli. Many of them worked. Some were even more effective than their natural counterparts, and several succeeded in killing bacteria that had evolved resistance to natural bacteriophages.
The scientists proceeded with appropriate caution. They limited their work to viruses that can’t infect humans and ran experiments under strict safety rules. But the essential fact is hard to ignore: Computers can now invent viable — even potent — viruses.
The Stanford paper is a preprint that has not yet undergone peer review, but the advance it describes holds enormous promise. The same tools that can conjure new viruses could one day be harnessed to cure disease. Viruses could be engineered to fight antibiotic-resistant bacteria, one of the great crises in global health. Cocktails of diverse AI-designed viruses could treat infections that no existing drug can touch.
But there is no sugarcoating the risks. While the Stanford team played it safe, what’s to stop others from using open data on human pathogens to build their own models? And if that happens, the same techniques could just as easily be used to create viruses lethal to humans — turning a laboratory breakthrough into a global security threat.
For decades, U.S. biosecurity strategy has been built on prevention. Many DNA synthesis companies screen orders to make sure customers aren’t printing genomes of known pathogens. Labs follow safety protocols. Export controls slow the spread of sensitive technologies. These guardrails still matter. But they cannot keep up with the pace and power of AI innovation. Screening systems cannot flag a virus that has never existed before. And no border can block the diffusion of algorithms once they are published online.
Resilience is the only viable answer. If AI collapses the timeline for designing biological weapons, the United States will have to reduce the timeline for responding to them. We can’t stop novel AI-generated threats. The real challenge is to outpace them.
First, the United States needs to build the computational tools required to respond as fast as new threats appear. The same models that design viruses can be trained to quickly design antibodies, antivirals and vaccines. But these models need data — on how immune systems and therapeutics interact with pathogens, on which designs fail in practice and on what manufacturing bottlenecks exist. Much of that information is siloed in private labs, locked up in proprietary datasets or missing entirely. The federal government should make building these high-quality datasets a priority.
Second, we need the physical capacity to turn those computer designs into real medicines. Right now, moving from a promising design to a working drug can take years. What’s needed are facilities on standby that can validate thousands of candidates in parallel, then quickly mass-produce the best ones. The private sector cannot justify the expense of building that capacity for emergencies that may never arrive. Government has to step in, taking the lead with long-term contracts that keep plants ready until the next crisis hits.
Third, regulation must adapt. The Food and Drug Administration's emergency-use pathways were not built for therapies designed by computers in real time. New fast-track authorities are needed to allow provisional deployment of AI-generated countermeasures and expedited clinical trials, coupled with rigorous monitoring and safety measures. And the entire system has to be stress-tested ahead of a crisis, with regular national exercises that simulate an AI-generated outbreak.
For years, experts have warned that generative biology could collapse the timeline between design and disaster. That moment has arrived. The viruses created in the Stanford experiment were harmless to humans. The next ones might not be.