Deworming: problems under re-analysis

A flawed study on deworming children—and new studies that expose its errors—reveal why activists and philanthropists alike need safeguards.

The book Zen and the Art of Motorcycle Maintenance, of all things, offers a critically important message for people who work in development and philanthropy: “The real purpose of the scientific method is to make sure nature hasn’t misled you into thinking you know something you actually don’t know.”

Three new papers published today confirm this by illustrating just how easily we can be misled by what we think we know, and just how powerfully the scientific method can safeguard us from continuing to be misled (and from investing significant time and effort in the wrong priorities). That’s because the three papers raise important questions about the practice of treating children for intestinal worms, which has become a darling of international development in recent years.

Deworming Programs Have Been “In”

Here’s the backstory. Worms infect people through contact with infected faeces. They live in people’s bodies (they can be a metre long!), eat their hosts’ food, deprive them of nutrients, and make them lethargic and ill. In 1999, two US economists conducting a study in Western Kenya found that “deworming” a number of school children improved their nutritional intake, reduced their incidence of anemia, and, by making them less ill and lethargic, increased their attendance at school and hence improved their exam results. The economists also claimed that attendance at schools where children did not receive treatment increased as well, by 7.5 percent, because those children, living in the same area as the treated children, were exposed to fewer worm eggs in faeces in the soil near their homes. (There are two main types of worms: soil-transmitted worms, and water-transmitted worms known as schistosomiasis or bilharzia. The Kenyan study was mainly of soil-transmitted worms but did pick up some schistosomiasis.)

Consequently, the Copenhagen Consensus made deworming one of its top recommendations. GiveWell named two organizations that focus on deworming in the top four on its list. And development economist Michael Kremer, a co-author of the 1999 Kenyan study, started an initiative called Deworm the World, which has treated 37 million children in four countries to date.

The Scientific Method

Now, the scientific method involves several safeguards against being misled. One is isolating variables to reveal which one(s) matter. Maybe the speed with which a dropped object hits the ground depends on the height from which it’s dropped and the gender of the person who drops it. So we experiment by having people of both genders drop identical objects from the same height, thus “isolating” gender as a variable and, when the objects hit the ground at the same time, showing that it doesn’t matter.

Another safeguard addresses bias by replicating an experiment elsewhere, and comparing and combining the answers. If we open our back-to-work program only to motivated people, then we don’t know whether their success getting jobs is due to the program (a “treatment effect”) or the unusual characteristics of the people we chose (a “selection effect”). The latter would create a selection bias. If we interview only the people in the program who stick it out to the end, we don’t hear from the people who quit because it was so arduous, so our user-experience data may suffer from survivor bias. These and other biases mislead us into thinking we know things we actually don’t know. Single studies may also be biased because they may unwittingly involve particularly unusual people or take place under unusual circumstances. They may also simply get freak results by chance.
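The pull of selection bias can be made concrete with a small simulation. This sketch invents a hypothetical back-to-work program whose true effect is zero; every number in it is illustrative, not drawn from any of the studies discussed here:

```python
import random

random.seed(0)

def got_job(motivation, in_program):
    # The program's true effect is zero: finding a job
    # depends only on the person's motivation.
    true_effect = 0.0
    p = 0.2 + 0.5 * motivation + true_effect * in_program
    return 1 if random.random() < p else 0

people = [random.random() for _ in range(100_000)]  # motivation scores in [0, 1]

# Self-selection: only the most motivated people join the program.
joined = [got_job(m, 1) for m in people if m > 0.7]
stayed_out = [got_job(m, 0) for m in people if m <= 0.7]
naive_gap = sum(joined) / len(joined) - sum(stayed_out) / len(stayed_out)

# Randomization: a coin flip, not motivation, decides who joins.
treated, control = [], []
for m in people:
    if random.random() < 0.5:
        treated.append(got_job(m, 1))
    else:
        control.append(got_job(m, 0))
rct_gap = sum(treated) / len(treated) - sum(control) / len(control)

print(f"apparent effect with self-selection: {naive_gap:.3f}")  # large, but spurious
print(f"apparent effect with randomization:  {rct_gap:.3f}")    # close to zero
```

Because the program here does nothing by construction, the sizeable gap under self-selection is pure selection bias; randomizing who joins drives the apparent effect to (near) zero.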

A third safeguard in the scientific method is repeating the analysis. In other words, checking the maths.

The three papers, now available, used the scientific method to great effect. The Cochrane Collaboration is a global network of medical researchers who do “systematic reviews” and “meta-analyses” (it may well have saved your life at some point). In 2012, the Cochrane Collaboration wrote: “It is probably misleading to justify contemporary deworming programmes based on evidence of consistent benefit on nutrition, haemoglobin, school attendance or school performance.” Recent correspondence with the authors implies that they’ve not changed their minds. And today, the Cochrane Collaboration publishes its fourth systematic review of mass deworming. The group looked at all 45 studies within its scope and concluded that: “There is now substantial evidence that this [mass deworming treatment] does not improve nutritional status, haemoglobin, cognition, or school performance.”

In two additional studies published today, researchers at the London School of Hygiene and Tropical Medicine (LSHTM) simply re-analyzed the Kenyan data. They found, if you’ll excuse the pun, a can of worms: errors, missing data, misinterpretation of probabilities, and a high risk of various biases. The effects are huge: The claimed effect on school attendance among untreated children seems entirely due to “calculation errors” and effectively disappeared on re-analysis; the claimed effect on anemia likewise lost statistical support.

We shouldn’t be surprised: That people make mistakes is hardly news. What’s impressive is that somebody took the important step of re-analyzing the data, caught the errors, and prevented us from being misled by them. As Yale’s Dean Karlan and I noted when the 2012 Cochrane worm study was published, this is exactly how science is supposed to work.

The re-analysis papers raise three more subtle issues. First, the choice of analytical method matters (even if the data are complete and accurate). When looking at changes in school attendance, the economists used a method common in economics; the epidemiologists used a different method common in epidemiology and found that “the strength of evidence supporting the improvement was dependent on the analysis approach used”. There can only be one “correct” answer, and it’s not yet clear which method is misleading.
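One well-known way analysis choices can diverge is clustering: deworming was assigned school by school, so pupils within a school are not independent observations. The sketch below uses simulated data (not the Kenyan dataset, and not the specific methods either team used) to show how a pupil-level analysis can understate uncertainty relative to a school-level one:

```python
import math
import random
import statistics

random.seed(1)

N_SCHOOLS, PUPILS = 20, 50
records = []  # (school, treated, attendance rate)
for s in range(N_SCHOOLS):
    treated = s < N_SCHOOLS // 2           # treatment assigned per school
    school_effect = random.gauss(0, 0.10)  # schools differ from one another
    for _ in range(PUPILS):
        records.append((s, treated, 0.7 + school_effect + random.gauss(0, 0.05)))

t_pupils = [y for _, tr, y in records if tr]
c_pupils = [y for _, tr, y in records if not tr]

# Pupil-level analysis: pretends all 1,000 pupils are independent.
se_pupil = math.sqrt(statistics.variance(t_pupils) / len(t_pupils)
                     + statistics.variance(c_pupils) / len(c_pupils))

# School-level analysis: one observation per school, respecting the clustering.
means = [statistics.mean([y for s2, _, y in records if s2 == s])
         for s in range(N_SCHOOLS)]
t_schools, c_schools = means[:N_SCHOOLS // 2], means[N_SCHOOLS // 2:]
se_school = math.sqrt(statistics.variance(t_schools) / len(t_schools)
                      + statistics.variance(c_schools) / len(c_schools))

print(f"standard error, pupil-level:  {se_pupil:.4f}")   # too small
print(f"standard error, school-level: {se_school:.4f}")  # larger, more honest
```

The estimated difference between the groups is the same either way; what changes is how confident the analysis claims to be. Clustering is only one illustrative source of divergence; the economists’ and epidemiologists’ approaches differ in more ways than this.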

Second is how rare re-analyses are. Open data to enable post-publication review is sexy and funded and increasingly common. But actually doing post-publication review is hard. It’s hard to fund (so hats off to 3ie, which funded this one); it’s hard to do (the original authors sacrificed masses of time digging up old files for LSHTM to use); and it’s hard to get the results published (pre-publication peer review of LSHTM’s papers took about five months).

Third is just how different this episode is from most impact research in the social sector. Such research is often unreported, or reported unclearly or incompletely, and only rarely are the raw data made available for inspection. I’ve argued before that most charities shouldn’t do impact evaluations (as has Dean, separately): eradicating misleading biases is just too hard for non-specialists. But when they do, they should publish the full details and data. The scientific method requires it.

This article was first published in Stanford Social Innovation Review.
