Hey readers,
One big hope for AI as machine learning improves is that we’ll be able to use it for drug discovery – harnessing the pattern-matching power of algorithms to identify promising drug candidates much faster and more cheaply than human scientists could alone.
But we may want to tread cautiously: Any system that is powerful and accurate enough to identify drugs that are safe for humans is inherently a system that will also be good at identifying drugs that are incredibly dangerous for humans.
That’s the takeaway from a new paper in Nature Machine Intelligence by Fabio Urbina, Filippa Lentzos, Cédric Invernizzi, and Sean Ekins. They took a machine learning model they’d trained to find non-toxic drugs, and flipped its directive so it’d instead try to find toxic compounds. In less than six hours, the system identified tens of thousands of dangerous compounds, including some very similar to VX nerve gas.
"Dual use" is here, and it’s not going away
Their paper hits on three interests of mine, all of which are essential to keep in mind while reading alarming news like this.
The first is the growing priority of "dual-use" concerns in scientific research. Biology is where some of the most exciting innovation of the 21st century is happening. And continued innovation, especially in broad-spectrum vaccines and treatments, is essential to saving lives and preventing future catastrophes.
But the tools that make DNA faster to sequence and easier to print, or make drug discovery cheaper, or help us easily identify chemical compounds that’ll do exactly what we want, are also tools that make it much cheaper and easier to do appalling harm. That's the "dual-use" problem.
For most of the 20th century, chemical and biological weapons were difficult and expensive to manufacture. For most of the 21st, that won’t be the case. If we don’t invest in managing that transition and ensuring that deadly weapons aren’t easy to obtain or produce, we run the risk that individuals, small terrorist groups, or rogue states could do horrific harm.
AI risk is getting more concrete, and no less scary
AI research increasingly has its own dual-use concerns. Over the last decade, as AI systems have become more powerful, more researchers (though certainly not all of them) have come to believe that humanity will court catastrophe if we build extremely powerful AI systems without taking adequate steps to ensure they do what we want them to do.
Any AI system that is powerful enough to do the things we’re going to want — invent new drugs, plan manufacturing processes, design new machines — is also powerful enough that it could invent deadly toxins, plan manufacturing processes with catastrophic side effects, or design machines that have internal flaws we don’t even understand.
When working with systems that are that powerful, someone, somewhere is going to make a mistake — and point a system at a goal that isn’t compatible with the safety and freedom of everyone on Earth. Turning over more and more of our society to steadily more powerful AI systems, even as we’re aware that we don’t really understand how they work or how to make them do what we want, would be a catastrophic mistake.
But because getting AI systems to align with what we want is really hard — and because their unaligned performance is often good enough, at least in the short term — it’s a mistake we’re actively making.
I think our best and brightest machine-learning researchers should spend some time thinking about this challenge, and look into working at one of the growing number of organizations trying to solve it.
When information is a risk
Let’s say you’ve discovered a way to teach an AI system to develop terrifying chemical weapons. Should you post a paper online describing how you did it? Or keep that information to yourself, knowing it could be misused?
In the world of computer security, there are established procedures for what to do when you discover a security vulnerability. Typically, you report it to the responsible organization (if you find a vulnerability in Apple products, you tell Apple) and give them time to fix it before you tell the public. This expectation preserves transparency while also making sure that “good guys” doing work in the computer security space aren’t just feeding “bad guys” a to-do list.
But there’s nothing similar in biology or AI. Virus discovery programs don’t usually keep the more dangerous pathogens they find secret until countermeasures exist; they tend to publish them immediately. When OpenAI slowed its rollout of the text-generating model GPT-2 because of misuse concerns, it was fiercely criticized and urged to do the more usual thing of publishing all the details.
The team that published the recent Nature Machine Intelligence paper gave a lot of thought to these “information hazard” concerns. The researchers said they were advised by safety experts to withhold some details of how exactly they achieved their result, to make things a little harder for any bad actor looking to follow in their footsteps.
By publishing their paper, they made the risks of emerging technologies a lot more concrete and gave researchers, policymakers, and the public a specific reason to pay attention. Describing these risky technologies publicly, but carefully, likely reduced risk overall.
Still, it’s deeply unfair to expect your average biology or AI researcher, who isn’t specialized in information security concerns, to make these calls on an ad hoc basis. National security, AI safety, and biosecurity experts should work together on a transparent framework for handling information risks, so individual researchers can consult experts as part of the publication process instead of trying to figure this out themselves.
—Kelsey Piper
Questions? Comments? Email us at futureperfect@vox.com or find me on Twitter at @kelseytuoc. And if you want to recommend this newsletter to your friends or colleagues, tell them to sign up at vox.com/future-perfect-newsletter.