How to test and trace a virus we haven’t discovered yet

We have the technology to ensure the next outbreak doesn’t become another Covid-19.

Dylan Matthews is a senior correspondent and head writer for Vox's Future Perfect section and has worked at Vox since 2014. He is particularly interested in global health and pandemic prevention, anti-poverty efforts, economic policy and theory, and conflicts about the right way to do philanthropy.

Part of Pandemic-Proof, Future Perfect’s series on the upgrades we can make to prepare for the next pandemic.

Early in the pandemic, it became clear that to get it under control, we needed to be testing a lot. If enough tests are available, you can identify and isolate infected individuals, which can dramatically reduce spread. And you can do it without the need for extensive contact tracing or quarantine measures for those who’ve been exposed but don’t know if they’re sick, which in turn minimizes the social and economic fallout of a pandemic.

That was what South Korea did from early in the Covid-19 pandemic, flattening the curve and keeping case counts low primarily through relentless testing and case investigation. And the US could have done the same. It could have approved PCR and rapid antigen tests for the pathogen within days, and ensured that they were produced en masse and made readily available.

The US could have started sending out home rapid tests in February 2020, within a month of US officials becoming aware of the initial outbreak. It could have gone a step beyond South Korea, too: It could have had hospitals using genomic sequencers to detect the SARS-CoV-2 virus in patients, which could have set off an early alarm that the disease had reached the US. The country could have moved fast — and speed is everything in an exponentially growing pandemic.

The US did not take those steps, and those who live there have paid the price. But the US can and must do better next time. A key part of any pandemic prevention strategy is a renewed commitment to testing: both in-hospital genomic surveillance, which can help catch and smother a potential pandemic pathogen as it emerges, and cheap antigen testing, which can help manage a pandemic once it’s arrived (or even stop it if caught early enough).

Keeping that commitment requires more money. It also requires a new regulatory approach from government that allows manufacturers to move much more quickly.

Genomic testing can identify emerging pandemics very early on

When people talk about “testing” in the context of Covid-19, they usually mean diagnostic testing: nasal and throat swabs, PCR and antigen tests. But those diagnostics take time to develop, and we need to be tracking disease outbreaks earlier on. Luckily, with current technology, we can begin tracking a potentially dangerous virus when it first starts spreading. That’s the promise of genomic sequencing.

Clinical diagnostics are generally used to identify illness in specific people. That is, of course, important in a pandemic. But we also want to track where the virus is moving, especially if, like Covid-19, it can infect some people without making them clearly sick.

How, then, do we track a new virus, one that could cause a pandemic precisely because it’s unfamiliar to us? The short answer: genomic sequencing. Human blood, mucus, and other bodily fluids contain countless microorganisms, each with a distinct genetic code that genomic sequencing machines can identify. We can rapidly sequence the DNA and RNA in human or even environmental samples and screen for unknown viruses.

We already know the genomes of common existing viruses; we should expect those to show up with some frequency in such genomic screening. But when something new surfaces that doesn’t match the sequences we already know, that information could tell us a novel pathogen is spreading.

Genomic sequencing is “a universal pathogen detector,” as Jacob Swett, co-founder of altLabs, a nonprofit that works to develop biosecurity technologies, told me. A sequencer doesn’t need to know what it’s looking for; it identifies and decodes whatever genetic material is in a sample. You can then compare the results to the sequences of known viruses like SARS-CoV-2 or influenza or measles.

If it doesn’t match anything, you might have a new disease on the loose. Checking for partial matches can also tell you what viruses the new virus is related to, and even full matches can tell you that an existing disease is spreading much more rapidly.
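To make that matching step concrete, here’s a minimal sketch of how sequencing reads might be screened against a set of known viral genomes. Everything here is a toy assumption (the reference strings, the k-mer size, the 50 percent match threshold); real surveillance pipelines use full genome databases and alignment tools rather than a few hardcoded sequences.

```python
# Toy sketch: screen a sequencing read against known viral genomes by
# counting shared k-mers. Reference sequences here are short invented
# strings, not real genomes.

def kmers(seq, k=8):
    """Return the set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

REFERENCES = {
    "influenza_A": "AGCAAAAGCAGGGGAAAATAAAAACAACCAAAATGAAGGCAAACCTACTGGTCCTGTTA",
    "sars_cov_2":  "ATTAAAGGTTTATACCTTCCCAGGTAACAAACCAACCAACTTTCGATCTCTTGTAGATC",
}

def classify_read(read, min_overlap=0.5, k=8):
    """Label a read with its best reference match, or flag it as novel."""
    read_kmers = kmers(read, k)
    best_name, best_score = None, 0.0
    for name, genome in REFERENCES.items():
        score = len(read_kmers & kmers(genome, k)) / max(len(read_kmers), 1)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= min_overlap:
        return f"match: {best_name} ({best_score:.0%} of k-mers shared)"
    return "no match to known references -- flag as possible novel pathogen"

# A read lifted from the toy SARS-CoV-2 reference matches it; a sequence
# unlike anything in the database gets flagged for follow-up.
print(classify_read("CCTTCCCAGGTAACAAACCAACCAACTTTCGATC"))
print(classify_read("GGGGGGGGGGCCCCCCCCCCAAAAAAAAAATTTT"))
```

The important behavior is the fallback: a sample that matches nothing known isn’t discarded but escalated, which is the “universal detector” property Swett describes.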

It’s easy to see how this technology can help detect the next deadly virus before it becomes a pandemic — especially because sequencing technology has gotten much cheaper in recent years. It’s cheap enough that it’s feasible to deploy it in a few key locations, which would massively improve our ability to identify and track an outbreak early on.

The scientist David Ecker outlined one approach in a Scientific American piece: deploying genomic sequencers to hospitals for routine diagnostic use. Covering just 200 large, strategically chosen hospitals, he notes, would enable surveillance of 30 percent of all ER visits in the US.

That may not sound like a lot, but Ecker estimates that if just seven symptomatic patients stricken with a new pathogen seek care at one of these emergency rooms and have their samples sequenced, there’s a 95 percent chance of detecting the outbreak. From just seven patients! That could save a vast amount of time compared to physicians attempting to diagnose based on symptoms and existing laboratory tests.
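Ecker’s published estimate comes from a more detailed model, but a back-of-the-envelope calculation shows how quickly independent detection chances compound. The per-sample probability used below is an assumed illustrative value, not a number from his analysis:

```python
# If each sequenced sample independently flags the new pathogen with
# probability q, the chance that at least one of n samples flags it is
# 1 - (1 - q)^n. The value of q here is assumed for illustration.
q = 0.35
for n in range(1, 8):
    p_detect = 1 - (1 - q) ** n
    print(f"{n} patients sequenced -> {p_detect:.0%} chance of detection")
# With q = 0.35, seven sequenced patients already give roughly a 95
# percent chance that the outbreak is spotted.
```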

Ecker notes that during the 2001 anthrax attacks, only six of 164 doctors reviewing the cases even suspected anthrax. The most common symptoms of Covid-19 can seem little different from other respiratory illnesses, especially in the earliest stages. Humans are fallible; genomic sequencing is a lot less so.

To see how important this could be, imagine this strategy applied to Covid-19 in January 2020. By then there were likely already a few cases in the US. If a handful of them had presented at hospitals that were part of this surveillance system, national leaders would have realized that a novel virus had surfaced. They could have moved to isolate positive cases, build dedicated tests for the new pathogen, and use GPS data from the phones of positive cases to track down asymptomatic ones. Such a response might have halted the pandemic before it got started.

But for that kind of upgrade to be possible, we need to rethink how our regulatory agencies treat testing in an emergency.

We need a better regulatory approach and more funding for tests

In early 2020, as rumblings of a novel virus spreading in Wuhan began to stir, we genuinely had no idea how widely SARS-CoV-2 was spreading around the United States. We lacked sufficient genomic sequencing capacity — but we also had little to no capacity to do formal diagnostic testing. As late as February, the CDC was requiring that all samples be sent to its lab in Atlanta, delaying tests significantly; worse, the agency contaminated the first round of tests it sent to laboratories, rendering many early attempts to test useless.

So one of the first indications that Covid-19 was spreading inside the United States at concerning rates came from the Seattle Flu Study.

The study, which launched in 2018 and was meant to track flu transmission in the city, was already collecting samples of respiratory pathogens from patients in Washington, one of the first states hit by Covid-19. What they had was a potentially valuable resource: samples from Washingtonians that could be tested for the new coronavirus.

But the research group ran into regulatory roadblocks. After a national public health emergency was declared on January 31, 2020, the FDA announced that, contrary to prior policy, all Covid tests would require an emergency use authorization from the agency. Previously, certified clinical laboratories had been allowed to develop and run their own tests without the FDA’s sign-off.

Frustrated by the regulatory dawdling, the researchers behind the Seattle Flu Study began testing samples on February 25 without government approval, and documented the first confirmed instance of community transmission inside the United States. On March 9, Washington state regulators told them to stop all testing. They resumed testing for a time, only for the FDA to stop them again in May because they lacked an emergency authorization.

There’s definitely an argument to be made for the FDA to police bad tests. If it doesn’t, said Daniel Carpenter, a professor of government at Harvard and a leading expert on the FDA, “the market might be flooded with bad tests. Consumers would be either too confident in the tests they took (lots of false positives and false negatives), or they would lose confidence in the market.” But this was arguably a case where the FDA was policing too aggressively.

“The CDC kept saying they were the only ones who could test,” Lea Starita, an assistant professor of genome sciences at the University of Washington and a researcher on the flu study, recalled to me. “That’s ridiculous. I could teach you how to do a test in an hour and a half.”

Starita’s frustration is widespread among pandemic preparedness experts. In lieu of a vaccine, mass testing is the best tool we have to contain a pandemic’s spread. Cheap, rapid, and self-administered antigen tests (as well as more precise but also much slower PCR tests) can tell public health officials where the spread is greatest, and they can enable infected people to learn quickly that they’re infected so they can quarantine and protect others — something that is especially important when a virus can spread asymptomatically, as SARS-CoV-2 can.

But testing in the US has not been abundant or widely accessible for most of this pandemic. The shortfall had especially ill effects in the summer of 2020 as the disease spread nationwide. After an initial surge in the Northeast, the disease began to gain ground across the country, and sufficient testing could have allowed the US to follow a South Korea-style “test, trace, and isolate” approach, in which positive cases are identified rapidly and quickly isolated so spread is contained. Without sufficient testing, rapid spread continued. Even as late as this winter, omicron seriously strained our supply of tests.

“Our original sin in this pandemic was not doing the same thing we did for vaccines for testing,” Jay Varma, a professor at Weill Cornell Medicine and former public health adviser to NYC Mayor Bill de Blasio, told me.

What the US government did for vaccines, as part of Operation Warp Speed (OWS), was make “advance market commitments”: promises, backed by many billions in funding, that the government would purchase huge quantities of vaccines once they were approved. But while OWS nominally included improved diagnostics as a priority, in practice it focused almost solely on vaccines.

The National Institutes of Health (NIH) launched a modest program called RADx, which provides funding to SARS-CoV-2 test developers, to testing efforts in underserved populations, and to more experimental testing approaches. But the money involved is small — about $1.5 billion to OWS’s $18 billion — and, crucially, RADx does not provide advance commitments guaranteeing a market for tests once they’re approved.

OWS-funded work on vaccines was, of course, vital. But testing is vital too. Advance market commitments promising the purchase of several billion at-home rapid tests could have vastly eased the shortage throughout the pandemic. During the next pandemic, making such commitments early will be crucial.

Emergencies are no time for red tape

But beyond investment, the FDA needs to do what doesn’t come naturally to a regulatory agency: take a laxer view of tests meant to identify infections.

Right now, the FDA assesses rapid antigen or “lateral flow” tests by comparing their results to those of much slower, more expensive, and harder-to-conduct lab-based PCR tests. If the rapid tests align closely enough with the PCR tests for the FDA’s comfort — currently the agency requires a “positive percent agreement” with PCR tests of 80 percent, down from an earlier threshold of 90 percent — they can be approved. If not, not.
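Positive percent agreement is simple to compute: among the samples the PCR reference calls positive, it’s the fraction the candidate antigen test also calls positive. A minimal sketch, with invented counts:

```python
# Positive percent agreement (PPA): of the samples PCR calls positive,
# the share the candidate antigen test also calls positive. Counts are
# made up for illustration.
def positive_percent_agreement(both_positive: int, pcr_positive: int) -> float:
    return both_positive / pcr_positive

ppa = positive_percent_agreement(both_positive=84, pcr_positive=100)
print(f"PPA = {ppa:.0%}")  # 84% clears the FDA's current 80 percent bar
```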

This approach helps explain why the FDA spent some of 2020 denying approval to perfectly adequate antigen tests: they lacked a track record of clinical use in which their results aligned with those of PCR tests. That kind of data, from actual patients who’ve been infected, is time-consuming and costly to collect: it requires a full clinical trial, it requires that the virus already be spreading in the real world, and its results vary based not just on how useful the test is but on the underlying dynamics of the disease.

For instance, PCR tests can pick up “dead” virus for up to three months, while rapid tests will only detect higher viral loads, which better correlate with infectiousness. If rapid tests have a higher positive percent agreement with PCR tests because they also pick up dead virus, that can be a bad thing!

The evaluation process consumes valuable time without necessarily providing better answers. What you want in a test is a given level of sensitivity to the presence of the pathogen you’re aiming to detect.

But Nikki Teran, the senior biosecurity fellow at the Institute for Progress, suggests that diagnostics meant for public health purposes should not require the same level of clinical testing during an emergency. Whether or not a test is sufficiently sensitive to inform decisions about isolating is a fact that can be verified in a lab, and doesn’t require a lengthy clinical trial with real patients.

Teran proposes setting a required level of sensitivity to the virus in question (a level tied to how much virus it takes to make a person sick or infectious) and having test developers contract with third parties that can confirm in the lab that their test meets that standard. This would vastly speed up approval for tests and avoid the kind of delays the Seattle Flu Study experienced.
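Here’s a rough sketch of what that bench verification might look like, assuming the standard is framed the way limits of detection typically are: at a specified viral concentration, the test must come back positive in at least 95 percent of replicates. The concentration, replicate count, and the toy test model are all assumptions for illustration:

```python
import random

random.seed(0)

def run_test(concentration, detection_threshold=1e4):
    """Toy model of a rapid test: more virus makes a positive more likely."""
    hit_prob = min(1.0, concentration / (2 * detection_threshold))
    return random.random() < hit_prob

def passes_lab_standard(required_concentration, replicates=20, required_hit_rate=0.95):
    """Check the test flags spiked samples often enough at the set concentration."""
    hits = sum(run_test(required_concentration) for _ in range(replicates))
    return hits / replicates >= required_hit_rate

# No clinical trial needed: spike samples at the concentration regulators
# consider infectious-relevant and count how often the test catches them.
print(passes_lab_standard(required_concentration=5e4))  # True for this toy test
```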

But money matters too. Carpenter, the Harvard professor, argues that making sure the agency is adequately resourced to review tests is crucial. “What we need most are two things: 1) funding for the test regulation outfit at FDA that rivals that of the drug-reviewing division, and 2) clearer guidance on what passes the bar,” he wrote in an email.

If America’s massive investment in vaccine research and production in 2020 was its biggest Covid-era triumph, its failure to do the same for testing was the nation’s biggest mistake. A major investment now in genomic sequencing, and a better funding and regulatory framework for diagnostic tests in a future pandemic, could prevent the US from repeating that error.
