When scientific information is dangerous

A new study shows the risks that can come from research into AI and biology.

US troops training with deadly VX nerve gas. Asked to invent toxic compounds, an AI came up with 40,000, including VX nerve gas.
Leif Skoogfors/Corbis via Getty Images
Kelsey Piper is a senior writer at Future Perfect, Vox’s effective altruism-inspired section on the world’s biggest challenges. She explores wide-ranging topics like climate change, artificial intelligence, vaccine development, and factory farms, and also writes the Future Perfect newsletter.

One big hope for AI as machine learning improves is that we’ll be able to use it for drug discovery — harnessing the pattern-matching power of algorithms to identify promising drug candidates much faster and more cheaply than human scientists could alone.

But we may want to tread cautiously: Any system that is powerful and accurate enough to identify drugs that are safe for humans is inherently a system that will also be good at identifying drugs that are incredibly dangerous for humans.

That’s the takeaway from a new paper in Nature Machine Intelligence by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. They took a machine learning model they’d trained to find non-toxic drugs, and flipped its directive so it would instead try to find toxic compounds. In less than six hours, the system identified tens of thousands of dangerous compounds, including some very similar to VX nerve gas.
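To make the idea of “flipping a directive” concrete, here is a minimal sketch of how a scored search over candidate molecules can be inverted by changing a single sign. It is purely illustrative: the function names and random placeholder “models” below are assumptions for the example, not the authors’ actual setup, details of which they deliberately withheld.

```python
import random

# Toy illustration only. Every function here is a hypothetical placeholder
# standing in for trained models; it is not the authors' withheld method.

def predict_efficacy(molecule: str) -> float:
    """Placeholder for a trained model scoring how promising a candidate is as a drug."""
    return random.random()

def predict_toxicity(molecule: str) -> float:
    """Placeholder for a trained toxicity predictor."""
    return random.random()

def propose_candidates(n: int) -> list[str]:
    """Placeholder for a generative model that proposes candidate molecules."""
    return [f"candidate_{i}" for i in range(n)]

def score(molecule: str, toxicity_weight: float) -> float:
    # toxicity_weight = -1.0 is the usual setup: toxic candidates are penalized.
    # toxicity_weight = +1.0 is the "flipped" setup: toxic candidates are rewarded.
    return predict_efficacy(molecule) + toxicity_weight * predict_toxicity(molecule)

def top_candidates(n_keep: int, toxicity_weight: float = -1.0) -> list[str]:
    candidates = propose_candidates(n_keep * 10)
    candidates.sort(key=lambda m: score(m, toxicity_weight), reverse=True)
    return candidates[:n_keep]

# An ordinary drug-discovery run and the inverted run differ by a single sign.
safe_leads = top_candidates(100, toxicity_weight=-1.0)
dangerous_leads = top_candidates(100, toxicity_weight=+1.0)
```

The point of the sketch is that the benign and the malicious searches share all the same machinery; only the sign on the toxicity term changes.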

“Dual use” is here, and it’s not going away

Their paper hits on three interests of mine, all of which are essential to keep in mind while reading alarming news like this.

The first is the growing priority of “dual-use” concerns in scientific research. Biology is where some of the most exciting innovation of the 21st century is happening. And continued innovation, especially in broad-spectrum vaccines and treatments, is essential to saving lives and preventing future catastrophes.

But the tools that make DNA faster to sequence and easier to print, or make drug discovery cheaper, or help us easily identify chemical compounds that’ll do exactly what we want, are also tools that make it much cheaper and easier to do appalling harm. That’s the “dual-use” problem.

Here’s an example from biology: adenovirus vector vaccines, like the Johnson & Johnson Covid-19 vaccine, work by taking a common, mild virus (adenoviruses often cause infections like the common cold), editing it so it can’t make you sick, and inserting a snippet of genetic code for the Covid-19 spike protein so your immune system learns to recognize it.

That’s incredibly valuable work, and vaccines developed with these techniques have saved lives. But work like this has also been spotlighted by experts as having particularly high dual-use risks: that is, this research is also useful to bioweapons programs. “Development of virally vectored vaccines may generate insights of particular dual-use concern such as techniques for circumventing pre-existing anti-vector immunity,” biosecurity researchers Jonas Sandbrink and Gregory Koblentz argued last year.

For most of the 20th century, chemical and biological weapons were difficult and expensive to manufacture. For most of the 21st, that won’t be the case. If we don’t invest in managing that transition and ensuring that deadly weapons aren’t easy to obtain or produce, we run the risk that individuals, small terrorist groups, or rogue states could do horrific harm.

AI risk is getting more concrete, and no less scary

AI research increasingly has its own dual-use concerns. Over the last decade, as AI systems have become more powerful, more researchers (though certainly not all of them) have come to believe that humanity will court catastrophe if we build extremely powerful AI systems without taking adequate steps to ensure they do what we want them to do.

Any AI system that is powerful enough to do the things we’re going to want — invent new drugs, plan manufacturing processes, design new machines — is also powerful enough that it could invent deadly toxins, plan manufacturing processes with catastrophic side effects, or design machines that have internal flaws we don’t even understand.

When working with systems that are that powerful, someone, somewhere is going to make a mistake — and point a system at a goal that isn’t compatible with the safety and freedom of everyone on Earth. Turning over more and more of our society to steadily more powerful AI systems, even as we’re aware that we don’t really understand how they work or how to make them do what we want, would be a catastrophic mistake.

But because getting AI systems to align with what we want is really hard — and because their unaligned performance is often good enough, at least in the short term — it’s a mistake we’re actively making.

I think our best and brightest machine-learning researchers should spend some time thinking about this challenge, and look into working at one of the growing number of organizations trying to solve it.

When information is a risk

Let’s say you’ve discovered a way to teach an AI system to develop terrifying chemical weapons. Should you post a paper online describing how you did it? Or keep that information to yourself, knowing it could be misused?

In the world of computer security, there are established procedures for what to do when you discover a security vulnerability. Typically, you report it to the responsible organization (find a vulnerability in Apple computers, you tell Apple) and give them time to fix it before you tell the public. This expectation preserves transparency while also making sure that “good guys” doing work in the computer security space aren’t just feeding “bad guys” a to-do list.

But there’s nothing similar in biology or AI. Virus discovery programs don’t usually keep the more dangerous pathogens they find secret until countermeasures exist. They tend to publish them immediately. When OpenAI slowed its rollout of the text-generating model GPT-2 because of misuse concerns, it was fiercely criticized and urged to follow the usual practice of publishing all the details.

The team that published the recent Nature Machine Intelligence paper gave a lot of thought to these “information hazard” concerns. The researchers said they were advised by safety experts to withhold some details of how exactly they achieved their result, to make things a little harder for any bad actor looking to follow in their footsteps.

By publishing their paper, they made the risks of emerging technologies a lot more concrete and gave researchers, policymakers, and the public a specific reason to pay attention. Ultimately, it was a way of describing risky technologies that likely reduced risk overall.

Still, it’s deeply unfair to expect your average biology or AI researcher, who isn’t specialized in information security, to make these calls on an ad hoc basis. National security, AI safety, and biosecurity experts should work together on a transparent framework for handling information risks, so individual researchers can consult experts as part of the publication process instead of having to figure it out themselves.
