There was a hot zone of debate in scientific circles this week, triggered by an article in Nature Machine Intelligence by Collaborations Pharmaceuticals, Inc.'s Fabio Urbina and three co-authors that explored what would happen if artificial intelligence tools for drug discovery were repurposed to find the most lethal toxins to the human body. Unsurprisingly, the results were immediate and grim:
In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only [the nerve agent] VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic … than publicly known chemical warfare agents.
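The mechanics behind that quote are chillingly simple. This is not the authors' actual pipeline — the function names and the toy scoring below are invented stand-ins for a real generative model and a learned toxicity predictor — but the inversion they describe can be caricatured in a few lines of Python: the same generate-and-screen loop, with a single flag deciding whether predicted-toxic molecules are discarded or kept.

```python
import random

# Conceptual sketch only, not the authors' method. `generate_candidate`
# and `predicted_toxicity` are hypothetical stand-ins for a generative
# molecular model and a toxicity regressor.
random.seed(0)

def generate_candidate():
    """Stand-in for a generative model emitting one candidate molecule."""
    return {"features": [random.random() for _ in range(4)]}

def predicted_toxicity(mol):
    """Stand-in for a learned toxicity score (higher = more toxic)."""
    return sum(mol["features"]) / len(mol["features"])

def screen(n_candidates, threshold, keep_toxic):
    """In normal drug discovery, keep_toxic=False discards molecules
    predicted to be toxic. Flipping that one flag turns the identical
    loop into a search for toxins."""
    kept = []
    for _ in range(n_candidates):
        mol = generate_candidate()
        is_toxic = predicted_toxicity(mol) >= threshold
        if is_toxic == keep_toxic:
            kept.append(mol)
    return kept

safe = screen(1000, threshold=0.7, keep_toxic=False)
dangerous = screen(1000, threshold=0.7, keep_toxic=True)
print(len(safe), len(dangerous))
```

The dual-use problem lives in that one boolean: the model, the scorer, and the loop are identical whether you are filtering toxicity out or optimizing for it.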
Given our deep interest in the intersection of bio and machine learning, the article made a splash over here. Josh Wolfe shared the article internally at Lux, and it quickly reverberated online among scientists and national security officials as well. The publication's timing couldn't have been worse: just as fears of chemical warfare have intensified with Russia's invasion of Ukraine, the thought that there might be thousands of toxins worse than VX and Novichok just waiting to be discovered with off-the-shelf generative AI models is terrifying. As Urbina et al. wrote:
The reality is that this is not science fiction. We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities? Most will work on small molecules, and many of the companies are very well funded and likely using the global chemistry network to make their AI-designed molecules. How many people have the know-how to find the pockets of chemical space that can be filled with molecules predicted to be orders of magnitude more toxic than VX? We do not currently have answers to these questions.
While the implications of the article are indeed terrifying, they are hardly surprising. In fact, bioweapons (and to a much lesser extent, chemical warfare) have been considered the top U.S. national security threat for more than two decades now. They are considered far more of a threat than even nuclear proliferation, which generally captures the public's imagination.
The materials and tools needed to produce bioweapons are more accessible and cheaper than the unique equipment required for nuclear weapons. Due to that latter bottleneck, a multitude of institutions like the Nuclear Suppliers Group ensure that there is an all-but-closed system for monitoring the movement of this critical equipment, giving global intelligence agencies rapid access to information on potential proliferation risks. No such regime is possible for biological research labs, where reagents and wetlab equipment are widely manufactured and broadly available.
There have been intense concerns that synthetic biology, and particularly the development of DNA editing tools like CRISPR, could lead to an exponential increase in bioweapon threats. It makes sense, after all: the precision of these tools, coupled with their growing accessibility, seemed to portend a rapid rise in the capability of bad actors to invent horrific new pathogens to inflict on the world.
Yet, if we take a step back, we realize that such potential is more imaginative than substantive. In their Nature Machine Intelligence article, Urbina et al. note that they discovered many compounds predicted to be more toxic than VX. But how toxic is VX in the first place? They write that "a few salt-sized grains of VX (6–10 mg) is sufficient to kill a person." In other words, it's essentially as deadly as a substance can possibly be. Going from a few grains of toxin to a single grain to reach lethality is hardly a major qualitative advance.
That's indeed a recurring pattern in this field. While there are widespread fears of mad scientists inventing deadly contagions in hidden wetlabs in the caves of Waziristan, the reality is that the world is already familiar with incredibly transmissible and deadly pathogens. Ebola, as just one choice example, kills roughly half of those infected while also spreading relatively easily. As one former presidential advisor on bioweapons explained to me years ago, Mother Nature is quite efficient at producing terrifying bioweapons all on her own, no mad scientists required (just take a look at the Covid-19 pandemic of the past two years). In the end, our public health response to a naturally occurring pandemic and a man-made one would be exactly the same.
Urbina et al. emphasize that scientists need to be alert to the dual-use implications of their discoveries. “There has not previously been significant discussion in the scientific community about this dual-use concern around the application of AI for de novo molecule design, at least not publicly,” they write.
That might be literally true with regard to this one niche of science, but it's wholly inaccurate more broadly. Dual-use concerns in biology have been a perennial subject of debate going back decades, and such concerns aren't limited to biologists. Many nuclear physicists were just as deeply worried about the prospect of their work accelerating the development of the atomic bomb and its successors. Given my teenage interest in bioweapons, pandemics and biodefense (we all have our youthful phases), my first series of papers at Stanford as an undergraduate analyzed this issue (still available on my personal website for those insanely curious).
Frankly, these dual-use concerns have become trite. Dual-use is an unsolvable problem in the biological sciences and medicine. A surgeon's scalpel is a tool for healing as well as murder. Research on pandemics delivers vaccines, while also allowing a malefactor to design a barbarous global plague. All the tools of biology — every wetlab in existence — can be used for good and evil, and sometimes you don't even need to be evil to cause great harm. As we have seen from discussions of lab leaks the past year, unintentional releases of pathogens are a regular occurrence at biolabs, even at the most secure BSL-4 facilities.
So while Urbina et al.'s argument is terrifying, I also find it banal: essentially the same story we've heard on this subject for decades now.
All that critique aside, there was one element of their tale that made me a bit more uneasy. On the potential for using artificial intelligence to find new manufacturing pathways for chemicals, the authors write:
We did not assess the virtual molecules for synthesizability or explore how to make them with retrosynthesis software. For both of these processes, commercial and open-source software is readily available that can be easily plugged into the de novo design process of new molecules. … With current breakthroughs and research into autonomous synthesis, a complete design–make–test cycle applicable to making not only drugs, but toxins, is within reach. Our proof of concept thus highlights how a nonhuman autonomous creator of a deadly chemical weapon is entirely feasible.
One of the only checks on chemical weapons production is that large-scale manufacturing is typically recognizable from satellite imaging as well as from the purchase of manufacturing equipment and chemical precursors. Since we know how these weapons are made, we can search for the right clues to indicate their manufacture (a check that is unfortunately less relevant to bioweapons, since pathogens are self-replicating and can be produced at small scale, and thus don't give off the same scale-related signals).
With new AI tools, it’s not just that alternative pathways could make it simpler to produce these weapons, but that a covert program could hide its tracks by simultaneously using different pathways to obfuscate its true intentions. If there are one hundred unique ways to produce VX, it gets progressively harder to track.
Nonetheless, we return to the same overarching problem: exploring alternative pathways to chemical synthesis can make it easier to produce nerve agents, but also dramatically lower the cost and increase the availability of life-saving treatments. That’s the dual-use dilemma, and it’s never, ever going away. We have to continue pushing forward on science while improving our human institutions to ensure that they mitigate the downsides of these advancements.