Few health institutions around the world are as renowned as the US Centers for Disease Control and Prevention, which makes it all the more baffling that the agency fumbled the rollout of coronavirus diagnostic tests across the country so badly. While other countries have managed to test hundreds of thousands of people, the CDC has tested only 1,235 patients. Speed is of the essence in the early stages of an epidemic, and the CDC's mistakes are already proving costly to tracking the outbreak in the US.
On February 5 the CDC began to send out coronavirus test kits, but many of the kits were soon found to have faulty negative controls (the check that should show no signal when the coronavirus is absent), caused by contaminated reagents, probably a side effect of a rush to put the kits together. Labs whose negative controls failed had to ship their samples to the CDC itself for testing.
The CDC’s kits are based on PCR testing, which makes millions or billions of copies of a DNA sample so that clinicians can easily identify and study it. PCR is a well established technology that’s been around for 35 years. We’ve improved the process with upgrades such as higher-quality enzymes and reagents, allowing for more precise testing and making it possible to detect targets in real time even while the assay is still running.
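To get a feel for the numbers involved, each thermal cycle of PCR roughly doubles the amount of target DNA, so a handful of starting molecules becomes billions of copies within a few dozen cycles. The sketch below is purely illustrative, with made-up starting quantities, and assumes ideal doubling rather than the inefficiencies and plateaus of a real reaction.

```python
# Illustrative sketch: idealized PCR amplification, assuming perfect doubling
# each cycle (real reactions are less efficient and eventually plateau).

def copies_after_cycles(starting_copies: int, cycles: int) -> int:
    """Number of target copies after a given number of ideal PCR cycles."""
    return starting_copies * 2 ** cycles

if __name__ == "__main__":
    # Hypothetical example: 10 viral RNA targets present in the reaction
    for n in (10, 20, 30, 40):
        print(f"{n} cycles: {copies_after_cycles(10, n):,} copies")
    # 30 cycles already yields roughly 10 billion copies from just 10 starting
    # targets, which is why even trace contamination can produce a signal.
```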
So how exactly does the CDC, of all places, goof up something so tried and true?
The first thing to know is that PCR is a very sensitive test. You need extremely clean reagents, and the smallest contaminants can ruin it completely (as happened in this instance). A negative control that detects viral genome where there should be none, raising a false positive, is practically a worst-case scenario, because it calls into question every other result in the run: you don't know whether samples are truly positive or are positive only because of the contamination. "You basically can't even judge if anything worked," says Nigel McMillan, the director of infectious diseases and immunology at Griffith University in Australia.
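To see why one bad control poisons an entire run, here is a minimal, hypothetical sketch of the kind of run-level check a lab might apply when reading out results. The field names and the cycle-threshold cutoff are assumptions for illustration, not the CDC's actual interpretation criteria.

```python
# Minimal, hypothetical sketch of run-level qPCR control logic.
# The Ct cutoff and structure are illustrative assumptions, not the
# CDC's actual acceptance criteria.

DETECTION_CT_CUTOFF = 40  # assumed cycle-threshold cutoff for "signal detected"

def run_is_valid(negative_control_ct: float | None,
                 positive_control_ct: float | None) -> bool:
    """A run is usable only if the no-template (negative) control shows no
    signal and the positive control does show one."""
    negative_clean = (negative_control_ct is None
                      or negative_control_ct >= DETECTION_CT_CUTOFF)
    positive_detected = (positive_control_ct is not None
                         and positive_control_ct < DETECTION_CT_CUTOFF)
    return negative_clean and positive_detected

# If the negative control amplifies (for example, because of contaminated
# reagents), run_is_valid(...) returns False and every patient result on
# that plate is uninterpretable.
```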
The amplification of DNA in PCR has to be initiated using short strands that are complementary to the target DNA, called primers. Keith Jerome, the head of virology at the University of Washington, points out that “primer design is still somewhat of an art, and not fully predictable.” Even when you have a good database of viral sequences, not all primer sets that look good on a computer will perform well in real life.
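Part of why primer design remains "somewhat of an art" is that candidate primers are first screened against rough rules of thumb (length, GC content, estimated melting temperature) before they are ever tested in the lab, and passing those screens is no guarantee of real-world performance. The sketch below applies a few of those textbook heuristics to a made-up sequence; the sequence and thresholds are hypothetical, not any lab's actual primers or criteria.

```python
# Illustrative primer sanity checks using common rules of thumb.
# The sequence and thresholds are hypothetical, not real assay primers.

def gc_fraction(seq: str) -> float:
    """Fraction of bases that are G or C."""
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> float:
    """Rough melting-temperature estimate (Wallace rule) for short oligos:
    2 degrees C per A/T plus 4 degrees C per G/C."""
    return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

def looks_reasonable(primer: str) -> bool:
    """Very rough in-silico screen: typical length, balanced GC, workable Tm."""
    return (18 <= len(primer) <= 25
            and 0.40 <= gc_fraction(primer) <= 0.60
            and 50 <= wallace_tm(primer) <= 65)

if __name__ == "__main__":
    candidate = "ATGCGTACGTTAGCCTAGCA"  # made-up 20-mer, for illustration only
    print("passes basic checks:", looks_reasonable(candidate))
    # Primers that pass every computational screen can still underperform
    # against real patient samples, which is the point Jerome is making.
```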
These are common problems that can afflict not just PCR, but testing for any virus or any new infectious disease. “For the CDC, however, I’m sure it is unheard of,” says McMillan. “They’re normally very careful about these things.”
Don’t blame PCR itself for the lack of reliable tests, though. According to Duane Newton, the director of clinical microbiology at the University of Michigan, the biggest limitation in diagnostics is not the technology, but rather the regulatory approval process for new tests and platforms. While this process is critical for ensuring safety and efficacy, the necessary delays often “hamper the willingness and ability of manufacturers and laboratories to invest resources into developing and implementing new tests,” he says.
Case in point: FDA rules initially prevented state and commercial labs from developing their own coronavirus diagnostic tests, even if they could develop coronavirus PCR primers on their own. So when the only available test suddenly turned out to be bunk, no one could actually say what primer sets worked.
The CDC and FDA reversed course and lifted this rule on February 29, and commercial and academic labs are now allowed to participate. “Lots of people are working on this, and we’re on the phone all the time with each other comparing notes,” says Jerome. “At least in our hands, it seems that some of the CDC primers work better than others, some of the WHO primer sets look really good, and some from academic groups look great also.”
There’s no particular technical difficulty in designing a PCR test, so most laboratories should be able to do so with confidence. This week, state and commercial labs began testing on their own. We’re already seeing major steps forward; the University of Washington, for instance, has a new diagnostic that will allow it to test 1,500 samples a day. A group in Japan claims to have a test that can detect the virus in just 10 to 30 minutes.
“The great strength the US has always had, not just in virology, is that we’ve always had a wide variety of people and groups working on any given problem,” says Jerome. “When we decided all coronavirus testing had to be done by a single entity, even one as outstanding as CDC, we basically gave away our greatest strength.”
The reagents are now fixed, and the CDC looks ready to move forward. By the end of the week, it expects labs around the country to be able to test about 400,000 patients. Other groups around the world are already looking at the crisis as an opportunity to make PCR faster and to develop other viral diagnostics, such as antibody testing. Local and commercial institutions should be given a similar mandate to act decisively, without bizarre constraints. “You’ve got a fantastic resource in the CDC,” says McMillan. “But if they’re not proactive enough or timely enough, then things down the road will start to fall.”
A new reinforcement-learning algorithm has learned to optimize the placement of components on a computer chip to make it more efficient and less power-hungry.
3D Tetris: Chip placement, also known as chip floor planning, is a complex three-dimensional design problem. It requires the careful configuration of hundreds, sometimes thousands, of components across multiple layers in a constrained area. Traditionally, engineers will manually design configurations that minimize the amount of wire used between components as a proxy for efficiency. They then use electronic design automation software to simulate and verify their performance, which can take up to 30 hours for a single floor plan.
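The "amount of wire used" proxy mentioned above is commonly estimated with the half-perimeter wirelength (HPWL) of each net: the half-perimeter of the bounding box enclosing all the pins a wire must connect. The sketch below shows that calculation with made-up coordinates; it is an illustration of the general proxy, not the specific metric any particular tool uses.

```python
# Minimal half-perimeter wirelength (HPWL) sketch, a standard proxy for
# wiring cost in floor planning. Pin coordinates are made up for illustration.

def hpwl(pins: list[tuple[float, float]]) -> float:
    """Half-perimeter of the bounding box around a net's pins."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def total_wirelength(nets: list[list[tuple[float, float]]]) -> float:
    """Sum the HPWL proxy over every net in the design."""
    return sum(hpwl(net) for net in nets)

if __name__ == "__main__":
    # Two hypothetical nets connecting component pins on a chip canvas
    nets = [
        [(0.0, 0.0), (3.0, 4.0), (1.0, 2.0)],
        [(5.0, 5.0), (6.0, 7.0)],
    ]
    print("estimated wirelength:", total_wirelength(nets))  # 7.0 + 3.0 = 10.0
```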
Time lag: Because of the time investment put into each chip design, chips are traditionally supposed to last between two and five years. But as machine-learning algorithms have rapidly advanced, the need for new chip architectures has also accelerated. In recent years, several algorithms for optimizing chip floor planning have sought to speed up the design process, but they’ve been limited in their ability to optimize across multiple goals, including the chip’s power draw, computational performance, and area.
Intelligent design: In response to these challenges, Google researchers Anna Goldie and Azalia Mirhoseini took a new approach: reinforcement learning. Reinforcement-learning algorithms use positive and negative feedback to learn complicated tasks. So the researchers designed what’s known as a “reward function” to punish and reward the algorithm according to the performance of its designs. The algorithm then produced tens to hundreds of thousands of new designs, each within a fraction of a second, and evaluated them using the reward function. Over time, it converged on a final strategy for placing chip components in an optimal way.
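The researchers' actual reward function is more sophisticated than this, but as a purely illustrative stand-in, the sketch below scores a candidate placement by a weighted sum of the wirelength proxy sketched earlier and the layout's footprint, then negates it so that better placements earn higher reward. The terms and weights are assumptions, not those used in the Google work.

```python
# Hypothetical reward function for a chip-placement RL agent: the agent is
# rewarded for placements with low estimated wirelength and a small footprint.
# Terms and weights are illustrative, not the ones used by Goldie and Mirhoseini.

Placement = dict[str, tuple[float, float]]  # component name -> (x, y) position

def bounding_area(placement: Placement) -> float:
    """Area of the bounding box enclosing all placed components."""
    xs = [x for x, _ in placement.values()]
    ys = [y for _, y in placement.values()]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def reward(placement: Placement,
           nets: list[list[str]],
           w_wire: float = 1.0,
           w_area: float = 0.1) -> float:
    """Negative weighted cost: higher (less negative) is better."""
    wirelength = 0.0
    for net in nets:
        pins = [placement[component] for component in net]
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        wirelength += (max(xs) - min(xs)) + (max(ys) - min(ys))  # HPWL proxy
    return -(w_wire * wirelength + w_area * bounding_area(placement))

if __name__ == "__main__":
    # Made-up placement of three components and the nets connecting them
    placement = {"alu": (0.0, 0.0), "cache": (2.0, 1.0), "io": (4.0, 3.0)}
    nets = [["alu", "cache"], ["cache", "io"]]
    print("reward:", reward(placement, nets))
```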
Validation: After checking the designs with the electronic design automation software, the researchers found that many of the algorithm’s floor plans performed better than those designed by human engineers. It also taught its human counterparts some new tricks, the researchers said.
Production line: Throughout the field's history, progress in AI has been tightly interlinked with progress in chip design. The hope is this algorithm will speed up the chip design process and lead to a new generation of improved architectures, in turn accelerating AI advancement.