Archive for the ‘epidemiology’ Category

Ebola and health workers

It starts with familiar flu-like symptoms: a mild fever, headache, muscle and joint pains.

But within days this can quickly descend into something more exotic and frightening: vomiting and diarrhoea, followed by bleeding from the gums, the nose and gastrointestinal tract.

Death comes in the form of either organ failure or low blood pressure caused by the extreme loss of fluids.

Such fear-inducing descriptions have been doing the rounds in the media lately.

However, this is not Ebola but rather Dengue Shock Syndrome, an extreme form of dengue fever, a mosquito-borne disease that struggles to make the news.

That’s Seth Berkley, CEO of the GAVI Alliance, writing an opinion piece for the BBC. Berkley argues that Ebola grabs headlines not because it is particularly infectious or deadly, but because those of us from wealthy countries have otherwise forgotten what it’s like to be confronted with a disease we do not know how to or cannot afford to treat.

However, in wealthy countries, thanks to the availability of modern medicines, many of these diseases can now usually be treated or cured, and thanks to vaccines they rarely have to be. Because of this blessing we have simply forgotten what it is like to live under threat of such infectious and deadly diseases, and forgotten what it means to fear them.

Ebola does combine infectiousness and rapid lethality, even with treatment, in a way that few diseases do, and it’s been uniquely exoticized by books like The Hot Zone. But as Berkley and many others have pointed out, the fear isn’t really justified in wealthy countries. They have health systems that can effectively contain Ebola cases if they arrive — which I’d guess is more likely than not. So please ignore the sensationalism on CNN and elsewhere. (See for example Tara Smith on other cases when hemorrhagic fevers were imported into the US and contained.)

But one way that Ebola is different — in degree if not in kind — to the other diseases Berkley cites (dengue, measles, childhood diseases) is that its outbreaks are both symptomatic of weak health systems and then extremely destructive to the fragile health systems that were least able to cope with it in the first place.

Like the proverbial canary in the coal mine, an Ebola outbreak reveals underlying weaknesses in health systems. Shelby Grossman highlights this article from Africa Confidential:

MSF set up an emergency clinic in Kailahun [Sierra Leone] in June but several nurses had already died in Kenema. By early July, over a dozen health workers, nurses and drivers in Kenema had contracted Ebola and five nurses had died. They had not been properly equipped with biohazard gear of whole-body suit, a hood with an opening for the eyes, safety goggles, a breathing mask over the mouth and nose, nitrile gloves and rubber boots.

On 21 July, the remaining nurses went on strike. They had been working twelve-hour days, in biohazard suits at high temperatures in a hospital mostly without air conditioning. The government had promised them an extra US$30 a week in danger money but despite complaints, no payment was made. Worse yet, on 17 June, the inexperienced Health and Sanitation Minister, Miatta Kargbo, told Parliament that some of the nurses who had died in Kenema had contracted Ebola through promiscuous sexual activity.

Only one nurse showed up for work on 22 July, we hear, with more than 30 Ebola patients in the hospital. Visitors to the ward reported finding a mess of vomit, splattered blood and urine. Two days later, Khan, who was leading the Ebola fight at the hospital and now with very few nurses, tested positive. The 43-year-old was credited with treating more than 100 patients. He died in Kailahun at the MSF clinic on 29 July…

In addition to the tragic loss of life, there’s also the matter of distrust of health facilities that will last long after the epidemic is contained. Here’s Adam Nossiter, writing for the NYT on the state of that same hospital in Kenema as of two days ago:

The surviving hospital workers feel the stigma of the hospital acutely.

“Unfortunately, people are not coming, because they are afraid,” said Halimatu Vangahun, the head matron at the hospital and a survivor of the deadly wave that decimated her nursing staff. She knew, all throughout the preceding months, that one of her nurses had died whenever a crowd gathered around her office in the mornings.

There’s much to read on the current outbreak — see also this article by Denise Grady and Sheri Fink (one of my favorite authors) on tracing the index patient (first case) back to a child who died in December 2013. One of the saddest things I’ve read about previous Ebola outbreaks is this profile of Dr. Matthew Lukwiya, a physician who died fighting Ebola in Uganda.

The current outbreak is different in terms of scale and its having reached urban areas, but if you read through these brief descriptions of past Ebola outbreaks (via Wikipedia) you’ll quickly see that the transmission to health workers at hospitals is far too typical. Early transmission seems to be amplified by health facilities that weren’t properly equipped to handle the disease. (See also this article (PDF) on a 1976 outbreak.) The community and the brave health workers responding to the epidemic then pay the price.

Ebola’s toll on health workers is particularly harsh given that the affected countries are starting with an incredible deficit. I was recently looking up WHO statistics on health worker density, and it struck me that the three countries at the center of the current Ebola outbreak are all close to the very bottom of rankings by health worker density. Here are the most recent figures for the ratio of physicians and nurses to the population of each country:*

Liberia has already lost three physicians to Ebola, which is especially tragic given that there are so few Liberian physicians to begin with: somewhere around 60 (in 2008). The equivalent health systems impact in the United States would be something like losing 40,000 physicians in a single outbreak.
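The scaling above is simple arithmetic, sketched below in Python. The Liberian figures are the ones cited in this post; the US physician count (~800,000) is my own rough assumption for illustration, not an official statistic.

```python
# Back-of-envelope scaling of Liberia's physician losses to the US.
# ASSUMPTION: the US physician count below is a rough illustrative figure.
liberia_physicians = 60      # approximate Liberian physician count (2008)
liberia_deaths = 3           # physicians lost to Ebola so far

fraction_lost = liberia_deaths / liberia_physicians   # 5% of the workforce

us_physicians = 800_000      # assumed US physician workforce
us_equivalent = fraction_lost * us_physicians

print(f"Share of physicians lost: {fraction_lost:.0%}")         # 5%
print(f"US-equivalent loss: ~{us_equivalent:,.0f} physicians")  # ~40,000
```

Losing one in twenty physicians in a single outbreak is the kind of shock no health system absorbs easily, let alone one starting from such a deficit.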

After the initial emergency response subsides — which will now be on an unprecedented scale and for an unprecedented length of time — I hope donors will make the massive investments in health worker training and systems strengthening that these countries needed prior to the epidemic. More and better trained and equipped health workers will save lives otherwise lost to all the other infectious diseases Berkley mentioned in the article linked above, but they will also stave off future outbreaks of Ebola or new diseases yet unknown. And greater investments in health systems years ago would have been a much less costly way — in terms of money and lives — to limit the damage of the current outbreak.  

(*Note on data: this is quick-and-dirty, just to illustrate the scale of the problem. I.e., ideally you’d use more recent data, compare health worker numbers with population numbers from the same year, and note data quality issues surrounding counts of health workers.)

(Disclaimer: I’ve remotely supported some of CHAI’s work on health systems in Liberia, but these are my personal views.)

Typhoid counterfactuals

An acquaintance (who doesn’t work in public health) recently got typhoid while traveling. She noted that she had had the typhoid vaccine less than a year ago but got sick anyway. Surprisingly to me, even though she knew “the vaccine was only about 50% effective,” she now felt that it was a mistake to have gotten the vaccine. Why? “If you’re going to get the vaccine and still get typhoid, what’s the point?”

I disagreed but am afraid my defense wasn’t particularly eloquent in the moment: I tried to say that, well, if it’s 50% effective and you and I both got the vaccine, then only one of us would get typhoid instead of both of us. That’s better, right? You just drew the short straw. Or, if you would have otherwise gotten typhoid twice, now you’ll only get it once!

These answers weren’t reassuring in part because thinking counterfactually — what I was trying to do — isn’t always easy. Epidemiologists do this because they’re typically told ad nauseam to approach causal questions by first thinking “how could I observe the counterfactual?” At one point after finishing my epidemiology coursework I started writing a post called “The Top 10 Things You’ll Learn in Public Health Grad School” and three or four of the ten were going to be “think counterfactually!”

A particularly artificial and clean way of observing this difference — between what happened and what could have otherwise happened — is to randomly assign people to two groups (say, vaccine and placebo). If the groups are big enough to average out any differences between them, then the differences in sickness you observe are due to the vaccine. It’s more complicated in practice, but that’s where we get numbers like the efficacy of the typhoid vaccine — which is actually a bit higher than 50%.

You can probably see where this is going: while the randomized trial gives you the average effect, for any given individual in the trial they might or might not get sick. Then, because any individual is assigned only to the treatment or control, it’s hard to pin their outcome (sick vs. not sick) on that alone. It’s often impossible to get an exhaustive picture of individual risk factors and exposures so as to explain exactly which individuals will get sick or not in advance. All you get is an average, and while the average effect is really, really important, it’s not everything.
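A quick simulation makes the average-vs-individual point concrete. This is a sketch under made-up assumptions (a 2% baseline attack rate and a true 50% efficacy, both invented for illustration): the trial recovers the average effect well, even though nothing in it tells any particular vaccinated-but-sick person why they drew the short straw.

```python
import random

random.seed(0)

def simulate_trial(n_per_arm=100_000, baseline_risk=0.02, efficacy=0.5):
    """Simulate a two-arm placebo-controlled trial. Each placebo subject
    gets sick with probability `baseline_risk`; each vaccinated subject
    with baseline_risk * (1 - efficacy). Returns estimated efficacy."""
    placebo_sick = sum(random.random() < baseline_risk
                       for _ in range(n_per_arm))
    vaccine_sick = sum(random.random() < baseline_risk * (1 - efficacy)
                       for _ in range(n_per_arm))
    risk_ratio = (vaccine_sick / n_per_arm) / (placebo_sick / n_per_arm)
    return 1 - risk_ratio  # estimated vaccine efficacy

print(f"Estimated efficacy: {simulate_trial():.0%}")
```

With arms this large the estimate lands very close to the true 50%, but every individual outcome in the simulation is still just a coin flip weighted by their arm's risk.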

This is related somewhat to Andrew Gelman’s recent distinction between forward and reverse causal questions, which he defines as follows:

1. Forward causal inference. What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth?

2. Reverse causal inference. What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse?

The randomized trial tries to give us an estimate of the forward causal question. But for someone who already got sick, the reverse causal question is primary, and the answer that “you were 50% less likely to have gotten sick” is hard to internalize. As Gelman says:

But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions. In many ways, it is the reverse causal questions that lead to the experiments and observational studies that we use to answer the forward questions.

The moral of the story — other than not sharing your disease history with a causal inference buff — is that reconciling the quantitative, average answers we get from the forward questions with the individual experience won’t always be intuitive.

17 07 2013

First responses to DEVTA roll in

In my last post I highlighted the findings from the DEVTA trial of deworming and Vitamin A in India, noting that the Vitamin A results would be more controversial. I said I expected commentaries over the coming months, but we didn’t have to wait that long after all.

First, a BBC Health Check program features a discussion of DEVTA with Richard Peto, one of the study’s authors. It’s for a general audience so it doesn’t get very technical, and because of that it really grated when they described this as a “clinical trial,” as that has certain connotations of rigor that aren’t reflected in the design of the study. If DEVTA is a clinical trial, then so was

Peto also says there were two reasons for the massive delay in publishing the trial: 1) time to check things and “get it straight,” and 2) that they were “afraid of putting up a trial with a false negative.” [An aside for those interested in publication bias issues: can you imagine an author with strong positive findings ever saying the same thing about avoiding false positives?!]

Peto ends by sounding fairly neutral re: Vitamin A (portraying himself in a middle position between advocates in favor and skeptics opposed) but acknowledges that with their meta-analysis results Vitamin A is still “cost-effective by many criteria.”

Second is a commentary in The Lancet by Al Sommer, Keith West, and Reynaldo Martorell. A little history: Sommer ran the first big Vitamin A trials in Sumatra (published in 1986) and is the former dean of the Johns Hopkins School of Public Health. (Sommer’s long-term friendship with Michael Bloomberg, who went to Hopkins as an undergrad, is also one reason the latter is so big on public health.) For more background, here’s a recent JHU story on Sommer’s receiving a $1 million research prize in part for his work on Vitamin A.

Part of their commentary is excerpted below, with my highlights in bold:

But this was neither a rigorously conducted nor acceptably executed efficacy trial: children were not enumerated, consented, formally enrolled, or carefully followed up for vital events, which is the reason there is no CONSORT diagram. Coverage was ascertained from logbooks of overworked government community workers (anganwadi workers), and verified by a small number of supervisors who periodically visited randomly selected anganwadi workers to question and examine children who these workers gathered for them. Both anganwadi worker self-reports, and the validation procedures, are fraught with potential bias that would inflate the actual coverage.

To achieve 96% coverage in Uttar Pradesh in children found in the anganwadi workers’ registries would have been an astonishing feat; covering 72% of children not found in the anganwadi workers’ registries seems even more improbable. In 2005–06, shortly after DEVTA ended, only 6·1% of children aged 6–59 months in Uttar Pradesh were reported to have received a vitamin A supplement in the previous 6 months according to results from the National Family Health Survey, a national household survey representative at national and state level…. Thus, it is hard to understand how DEVTA ramped up coverage to extremely high levels (and if it did, why so little of this effort was sustained). DEVTA provided the anganwadi workers with less than half a day’s training and minimal if any incentive.

They also note that the study’s funding was minimal compared to more rigorous studies, which may itself say something about its quality. And as an indication that there will almost certainly be alternative meta-analyses that weight the different studies differently:

We are also concerned that Awasthi and colleagues included the results from this study, which is really a programme evaluation, in a meta-analysis in which all of the positive studies were rigorously designed and conducted efficacy trials and thus represented a much higher level of evidence. Compounding the problem, Awasthi and colleagues used a fixed-effects analytical model, which dramatically overweights the results of their negative findings from a single population setting. The size of a study says nothing about the quality of its data or the generalisability of its findings.

I’m sure there will be more commentaries to follow. In my previous post I noted that I’m still trying to wrap my head around the findings, and I think that’s still right. If I had time I’d dig into this a bit more, especially the relationship with the Indian National Family Health Survey. But for now I think it’s safe to say that two parsimonious explanations for how to reconcile DEVTA with the prior research are emerging:

1. DEVTA wasn’t all that rigorous and thus never achieved the high population coverage levels necessary to have a strong mortality impact; the mortality impact was attenuated by poor coverage, resulting in the lack of a statistically significant effect in line with prior results. Thus it shouldn’t move our priors all that much. (Sommer et al. seem to be arguing for this.) Or,

2. There’s some underlying change in the populations between the older studies and these newer studies that causes the effect of Vitamin A to decline — this could be nutrition, vaccination status, shifting causes of mortality, etc. If you believe this, then you might discount studies because they’re older.

(h/t to @karengrepin for the Lancet commentary.)

25 03 2013

A massive trial, a huge publication delay, and enormous questions

It’s been called the “largest clinical* trial ever”: DEVTA (Deworming and Enhanced ViTamin A supplementation), a study of Vitamin A supplementation and deworming in over 2 million children in India, just published its results. “DEVTA” may mean “deity” or “divine being” in Hindi but some global health experts and advocates will probably think these results come straight from the devil. Why? Because they call into question — or at least attenuate — our estimates of the effectiveness of some of the easiest, best “bang for the buck” interventions out there.

Data collection was completed in 2006, but the results were just published in The Lancet. Why the massive delay? According to the accompanying discussion paper, it sounds like the delay was rooted in very strong resistance to the results after preliminary outcomes were presented at a conference in 2007. If it weren’t for the repeated and very public shaming by the authors of recent Cochrane Collaboration reviews, we might not have the results even today. (Bravo again, Cochrane.)

So, about DEVTA. In short, this was a randomized 2×2 factorial trial: blocks were allocated to receive 6-monthly vitamin A, albendazole, both, or neither (open control).

The results were published as two separate papers, one on Vitamin A and one on deworming, with an additional commentary piece:

The controversy is going to be more about what this trial didn’t find, rather than what it did: the confidence interval on the Vitamin A study’s mortality estimate (mortality ratio 0.96, 95% confidence interval of 0.89 to 1.03) is consistent with anything from a mortality reduction as large as 11% to an increase as large as 3%. The consensus from previous Vitamin A studies was mortality reductions of 20–30%, so this is a big surprise. Here’s the abstract to that paper:

Background

In north India, vitamin A deficiency (retinol <0·70 μmol/L) is common in pre-school children and 2–3% die at ages 1·0–6·0 years. We aimed to assess whether periodic vitamin A supplementation could reduce this mortality.

Methods

Participants in this cluster-randomised trial were pre-school children in the defined catchment areas of 8338 state-staffed village child-care centres (under-5 population 1 million) in 72 administrative blocks. Groups of four neighbouring blocks (clusters) were cluster-randomly allocated in Oxford, UK, between 6-monthly vitamin A (retinol capsule of 200 000 IU retinyl acetate in oil, to be cut and dripped into the child’s mouth every 6 months), albendazole (400 mg tablet every 6 months), both, or neither (open control). Analyses of retinol effects are by block (36 vs 36 clusters).

The study spanned 5 calendar years, with 11 6-monthly mass-treatment days for all children then aged 6–72 months.  Annually, one centre per block was randomly selected and visited by a study team 1–5 months after any trial vitamin A to sample blood (for retinol assay, technically reliable only after mid-study), examine eyes, and interview caregivers. Separately, all 8338 centres were visited every 6 months to monitor pre-school deaths (100 000 visits, 25 000 deaths at ages 1·0–6·0 years [the primary outcome]). This trial is registered at ClinicalTrials.gov, NCT00222547.

Findings

Estimated compliance with 6-monthly retinol supplements was 86%. Among 2581 versus 2584 children surveyed during the second half of the study, mean plasma retinol was one-sixth higher (0·72 [SE 0·01] vs 0·62 [0·01] μmol/L, increase 0·10 [SE 0·01] μmol/L) and the prevalence of severe deficiency was halved (retinol <0·35 μmol/L 6% vs 13%, decrease 7% [SE 1%]), as was that of Bitot’s spots (1·4% vs 3·5%, decrease 2·1% [SE 0·7%]).

Comparing the 36 retinol-allocated versus 36 control blocks in analyses of the primary outcome, deaths per child-care centre at ages 1·0–6·0 years during the 5-year study were 3·01 retinol versus 3·15 control (absolute reduction 0·14 [SE 0·11], mortality ratio 0·96, 95% CI 0·89–1·03, p=0·22), suggesting absolute risks of death between ages 1·0 and 6·0 years of approximately 2·5% retinol versus 2·6% control. No specific cause of death was significantly affected.

Interpretation

DEVTA contradicts the expectation from other trials that vitamin A supplementation would reduce child mortality by 20–30%, but cannot rule out some more modest effect. Meta-analysis of DEVTA plus eight previous randomised trials of supplementation (in various different populations) yielded a weighted average mortality reduction of 11% (95% CI 5–16, p=0·00015), reliably contradicting the hypothesis of no effect.
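The "reduction as large as 11%, or as much as a 3% increase" framing is just arithmetic on the reported mortality ratio and its confidence interval; here it is spelled out:

```python
# Converting the reported mortality ratio and its 95% CI into the
# percent-reduction terms used in the commentary around this trial.
mortality_ratio = 0.96
ci_low, ci_high = 0.89, 1.03

point_reduction = (1 - mortality_ratio) * 100  # 4% fewer deaths (point estimate)
best_case = (1 - ci_low) * 100                 # up to an 11% reduction
worst_case = (ci_high - 1) * 100               # up to a 3% increase

print(f"point estimate: {point_reduction:.0f}% reduction")
print(f"95% CI spans an {best_case:.0f}% reduction to a {worst_case:.0f}% increase")
```

In other words, the interval excludes the 20–30% reductions seen in earlier trials, but not a modest benefit.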

Note that instead of just publishing these no-effect results and leaving the meta-analysis to a separate publication, the authors go ahead and do their own meta-analysis of DEVTA plus previous studies and report that — much attenuated, but still positive — effect in their conclusion. I think that’s a fair approach, but also reveals that the study’s authors very much believe there are large Vitamin A mortality effects despite the outcome of their own study!
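The fixed-effects mechanics that Sommer and colleagues object to can be seen in a minimal inverse-variance sketch. The risk ratios and standard errors below are invented purely for illustration (they are not the actual DEVTA or prior-trial numbers): each study is weighted by the inverse of its variance, so one very large near-null study dominates the pooled estimate.

```python
import math

# Hypothetical studies: (risk ratio, standard error of log risk ratio).
# Two small "positive" trials and one very large near-null trial.
studies = [
    (0.70, 0.10),
    (0.75, 0.12),
    (0.96, 0.04),
]

# Fixed-effects (inverse-variance) weights on the log scale.
weights = [1 / se ** 2 for _, se in studies]
total_weight = sum(weights)

pooled_log_rr = sum(w * math.log(rr)
                    for (rr, _), w in zip(studies, weights)) / total_weight
pooled_rr = math.exp(pooled_log_rr)

for (rr, _), w in zip(studies, weights):
    print(f"RR {rr:.2f}: weight {100 * w / total_weight:.0f}%")
print(f"Pooled RR: {pooled_rr:.2f}")  # pulled toward the big near-null study
```

In this toy example the large study carries roughly three-quarters of the total weight, which is exactly the "dramatically overweights" concern raised in the Lancet commentary; a random-effects model would give the smaller trials relatively more say.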

[The only media coverage I’ve seen of these results so far comes from the Times of India, which includes quotes from the authors and Abhijit Banerjee.]

To be honest, I don’t know what to make of the inconsistency between these findings and previous studies, and am writing this post in part to see what discussion it generates. I imagine there will be more commentaries on these findings over the coming months, with some decrying the results and methodologies and others seeing vindication in them. In my view the best possible outcome is an ongoing concern for issues of external validity in biomedical trials.

What do I mean? Epidemiologists tend to think that external validity is less of an issue in randomized trials of biomedical interventions — as opposed to behavioral, social, or organizational trials — but this isn’t necessarily the case. Trials of vaccine efficacy have shown quite different efficacy for the same vaccine (see BCG and rotavirus) in different locations, possibly due to differing underlying nutritional status or disease burdens. Our ability to interpret discrepant findings can only be as sophisticated as the available data allows, or as sophisticated as allowed by our understanding of the biological and epidemiologic mechanisms that matter on the pathway from intervention to outcome. We can’t go back in time and collect additional information (think nutrition, immune response, baseline mortality, and so forth) on studies far in the past, but we can keep such issues in mind when designing trials moving forward.

All that to say, these results are confusing, and I look forward to seeing the global health community sort through them. Also, while the outcomes here (health outcomes) are different from those in the Kremer deworming study (education outcomes), I’ve argued before that lack of effect or small effects on the health side should certainly influence our judgment of the potential education outcomes of deworming.

*I think given the design it’s not that helpful to call this a ‘clinical’ trial at all – but that’s another story.

20 03 2013

On deworming

GiveWell’s Alexander Berger just posted a more in-depth blog review of the (hugely impactful) Miguel and Kremer deworming study. Here’s some background: the Cochrane review, GiveWell’s first response to it, and IPA’s very critical response.

I’ve been meaning to blog on this since the new Cochrane review came out, but haven’t had time to do the subject justice by really digging into all the papers. So I hope you’ll forgive me for just sharing the comment I left at the latest GiveWell post, as it’s basically what I was going to blog anyway:

Thanks for this interesting review — I especially appreciate that the authors [Miguel and Kremer] shared the material necessary for you [GiveWell] to examine their results in more depth, and that you talk through your thought process.

However, one thing you highlighted in your post on the new Cochrane review that isn’t mentioned here, and which I thought was much more important than the doubts about this Miguel and Kremer study, was that there have been so many other studies that did not find large effects on health outcomes! I’ve been meaning to write a long blog post about this when I really have time to dig into the references, but since I’m mid-thesis I’ll disclaim that this quick comment is based on recollection of the Cochrane review and your and IPA’s previous blog posts, so forgive me if I misremember something.

The Miguel and Kremer study gets a lot of attention in part because it had big effects, and in part because it measured outcomes that many (most?) other deworming studies hadn’t measured — but it’s not as if we believe these outcomes to be completely unrelated. This is a case where what we believe the underlying causal mechanism for the social effects to be is hugely important. For the epidemiologists reading, imagine this as a DAG (a directed acyclic graph) where the mechanism is “deworming -> better health -> better school attendance and cognitive function -> long-term social/economic outcomes.” That’s at least how I assume the mechanism is hypothesized.

So while the other studies don’t measure the social outcomes, it’s harder for me to imagine how deworming could have a very large effect on school and social/economic outcomes without first having an effect on (some) health outcomes — since the social outcomes are ‘downstream’ from the health ones. Maybe different people are assuming that something else is going on — that the health and social outcomes are somehow independent, or that you just can’t measure the health outcomes as easily as the social ones, which seems backwards to me. (To me this was the missing gap in the IPA blog response to GiveWell’s criticism as well.)

So continuing to give so much attention to this study, even if it’s critical, misses what I took to be the biggest takeaway from that review — there have been a bunch of studies that showed only small effects or none at all. They were looking at health outcomes, yes, but those aren’t unrelated to the long-term development, social, and economic effects. You [GiveWell] try to get at the external validity of this study by looking for different size effects in areas with different prevalence, which is good but limited. Ultimately, if you consider all of the studies that looked at various outcomes, I think the most plausible explanation for how you could get huge (social) effects in the Miguel Kremer study while seeing little to no (health) effects in the others is not that the other studies just didn’t measure the social effects, but that the Miguel Kremer study’s external validity is questionable because of its unique study population.

(Emphasis added throughout)


Still #1

Pop quiz: what’s the leading killer of children under five?

Before I answer, some background: my impression is that many if not most public health students and professionals don’t really get politics. And specifically, they don’t get how an issue being unsexy or just boring politics can result in lousy public policy. I was discussing this shortcoming recently over dinner in Addis with someone who used to work in public health but wasn’t formally trained in it. I observed, and they concurred, that students who go to public health schools (or at least Hopkins, where this shortcoming may be more pronounced) are mostly there to get technical training so that they can work within the public health industry, and that more politically astute students probably go for some other sort of graduate training, rather than concentrating on epidemiology or the like.

The end result is that you get cadres of folks with lots of knowledge about relative disease burden and how to implement disease control programs, but who don’t really get why that knowledge isn’t acted upon. On the other hand, a lot of the more politically savvy folks who are in a position to, say, set the relative priority of diseases in global health programming may not know much about the diseases themselves. Or, maybe more likely, they do the best job they can to get the most money possible for programs that are both good for public health and politically popular. But if not all diseases are equally “popular,” this can result in skewed policy priorities.

Now, the answer to that pop quiz: the leading killer of kids under 5 is… [drumroll]… pneumonia!

If you already knew the answer to that question, I bet you either a) have public health training, or b) learned it due to recent, concerted efforts to raise pneumonia’s public profile. On this blog the former is probably true (after all, I have a post category called “methodological quibbles”), but today I want to highlight the latter efforts.

To date, most of the political class and policymakers get the pop quiz wrong, and badly so. At Hopkins’ school of public health I took and enjoyed Orin Levine‘s vaccine policy class. (Incidentally, Orin just started a new gig with the Gates Foundation — congrats!) In that class and elsewhere I’ve heard Orin tell the story of quizzing folks on Capitol Hill and elsewhere in DC about the top three causes of death for children under five and time and again getting the answer “AIDS, TB and malaria.”

Those three diseases likely pop to mind because of the Global Fund, and because a lot of US funding for global health has been directed at them. And, to be fair, they’re huge public health problems and the metric of under-five mortality isn’t where AIDS hits hardest. But the real answer is pneumonia, diarrhea, and malnutrition. (Or malaria for #3 — it depends in part on whether you count malnutrition as a separate cause or a contributor to other causes.) The end result of this lack of awareness of pneumonia — and the prior lack of a domestic lobby — is that it gets underfunded in US global health efforts.

So, how to improve pneumonia’s profile? Today, November 12th, is the 4th annual World Pneumonia Day, and I think that’s a great start. I’m not normally one to celebrate every national or international “Day” for some cause, but for the aforementioned reasons I think this one is extremely important. You can follow the #WPD2012 hashtag on Twitter, or find other ways to participate on WPD’s act page. While they do encourage donations to the GAVI Alliance, you’ll notice that most of the actions are centered around raising awareness. I think that makes a lot of sense. In fact, just by reading this blog post you’ve already participated — though of course I hope you’ll do more.

I think politically-savvy efforts like World Pneumonia Day are especially important because they bridge a gap between the technical and policy experts. Precisely because so many people on both sides (the somewhat-false-but-still-helpful dichotomy of public health technical experts vs. political operatives) mostly interact with like-minded folks, we badly need campaigns like this to popularize simple facts within policy circles.

If your reaction to this post — and to another day dedicated to a good cause — is to feel a bit jaded, please recognize that you and your friends are exactly the sorts of people the World Pneumonia Day organizers are hoping to reach. At the very least, mention pneumonia today on Twitter or Facebook, or with your policy friends the next time health comes up.

Full disclosure: while at Hopkins I did a (very small) bit of paid work for IVAC, one of the WPD organizers, re: social media strategies for World Pneumonia Day, but I’m no longer formally involved. 

12 11 2012

A misuse of life expectancy

Jared Diamond is going back and forth with Acemoglu and Robinson over his review of their new book, Why Nations Fail. The exchange is interesting in and of itself, but I wanted to highlight one passage from Diamond’s response:

The first point of their four-point letter is that tropical medicine and agricultural science aren’t major factors shaping national differences in prosperity. But the reasons why those are indeed major factors are obvious and well known. Tropical diseases cause a skilled worker, who completes professional training by age thirty, to look forward to, on the average, just ten years of economic productivity in Zambia before dying at an average life span of around forty, but to be economically productive for thirty-five years until retiring at age sixty-five in the US, Europe, and Japan (average life span around eighty). Even while they are still alive, workers in the tropics are often sick and unable to work. Women in the tropics face big obstacles in entering the workforce, because of having to care for their sick babies, or being pregnant with or nursing babies to replace previous babies likely to die or already dead. That’s why economists other than Acemoglu and Robinson do find a significant effect of geographic factors on prosperity today, after properly controlling for the effect of institutions.

I’ve added the bolding to highlight an interpretation of what life expectancy means that is wrong, but all too common.

It’s analogous to something you may have heard about ancient Rome: since life expectancy was somewhere in the 30s, the Romans who lived to be 40 or 50 or 60 were incredibly rare and extraordinary. The problem is that life expectancy — by which we typically mean life expectancy at birth — is heavily skewed by infant mortality, or deaths under one year of age. Once you get past age five you’re generally out of the woods, at least compared to the very high mortality rates common for infants (under one year old) and children (under five years old). While it’s true that there were fewer old folks in ancient Roman society, or — to use Diamond’s example — modern Zambian society, the difference isn’t nearly as pronounced as you might think given the differences in life expectancy.

Does this matter? And if so, why? One area where it’s clearly important is Diamond’s usage in the passage above: examining the impact of changes in life expectancy on economic productivity. Despite the life expectancy at birth of 38 years, a Zambian male who reaches the age of thirty does not just have eight years of life expectancy left — it’s actually 23 years!

Here it’s helpful to look at life tables, which show mortality and life expectancy at different intervals throughout the lifespan. This WHO paper by Alan Lopez et al. (PDF) examining mortality between 1990 and 1999 in 191 countries provides some nice data: page 253 is a life table for Zambia in 1999. We see that males have a life expectancy at birth of just 38.01 years, versus 38.96 for females (among the lowest in the world at that time). If you look at that single number you might conclude, like Diamond, that a 30-year-old worker has only ~10 years of life left. But the life expectancy for those males still alive at age 30 (64.2% of the original birth cohort remains alive at this age) is actually 22.65 years. Similarly, the 18% of Zambians who reach age 65 — retirement age in the US — can expect to live an additional 11.8 years, despite having already lived 27 years past the life expectancy at birth.
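If you’re curious how those “remaining years” numbers fall out of a life table, here’s a minimal sketch of the standard abridged life-table arithmetic. The mortality rates below are made-up round numbers for illustration, not the actual WHO figures for Zambia:

```python
def life_expectancies(intervals, open_interval_ex):
    """intervals: list of (width_in_years, probability_of_dying_in_interval).
    open_interval_ex: assumed remaining life expectancy for survivors of the
    final open-ended age group. Returns e_x at the start of each interval
    (plus the open-ended group)."""
    # l_x: proportion of the birth cohort surviving to each interval start
    l = [1.0]
    for width, qx in intervals:
        l.append(l[-1] * (1 - qx))
    # L_x: person-years lived in each interval (deaths assumed mid-interval)
    L = [w * (l[i] + l[i + 1]) / 2 for i, (w, _) in enumerate(intervals)]
    L.append(l[-1] * open_interval_ex)  # person-years in the open-ended group
    # e_x = T_x / l_x, where T_x is total person-years lived above age x
    e, T = [], sum(L)
    for i in range(len(L)):
        e.append(T / l[i])
        T -= L[i]
    return e

# Toy cohort: 10% die in infancy, 8% between ages 1 and 5, and survivors
# of age 5 live on average 50 more years.
e = life_expectancies([(1, 0.10), (4, 0.08)], open_interval_ex=50)
```

Even this toy cohort shows the skew: because so many deaths happen in infancy, remaining life expectancy at age 1 actually exceeds life expectancy at birth.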

These numbers are still, of course, dreadful — there’s room for decreasing mortality at all stages of the lifespan. Diamond’s correct in the sense that low life expectancy results in a much smaller economically active population. But he’s incorrect when he estimates much more drastic reductions in the economically productive years that workers can expect once they reach their economically productive 20s, 30s, and 40s.

----

[Some notes: 1. The figures might be different if you limit them to “skilled workers” who aren’t fully trained until age 30, as Diamond does; 2. I’ve also assumed that Diamond is working from general life expectancy, which was around 40 years, rather than from a particular study showing 10 years of life expectancy at age 30 for some subset of skilled workers, possibly due to high HIV prevalence — that seems possible but unlikely; 3. In these Zambia estimates, about 10% of males die before reaching one year of age, and over 17% before reaching five years of age. By contrast, between the ages of 15 and 20 only 0.6% of surviving males die, and you don’t see mortality rates higher than the under-5 ones until above age 85!; and 4. Zambia is an unusual case because much of its poor life expectancy is due to very high HIV/AIDS prevalence and mortality — which actually does affect adult mortality rates and not just infant and child mortality rates. Despite this caveat, it’s still true that Diamond’s interpretation is off.]

The great quant race

My Monday link round-up included this Big Think piece asking eight young economists about the future of their field. But, I wanted to highlight the response from Justin Wolfers:

Economics is in the midst of a massive and radical change.  It used to be that we had little data, and no computing power, so the role of economic theory was to “fill in” for where facts were missing.  Today, every interaction we have in our lives leaves behind a trail of data.  Whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.  This background informs how I think about the future of economics.

Specifically, the tools of economics will continue to evolve and become more empirical.  Economic theory will become a tool we use to structure our investigation of the data.  Equally, economics is not the only social science engaged in this race: our friends in political science and sociology use similar tools; computer scientists are grappling with “big data” and machine learning; and statisticians are developing new tools.  Whichever field adapts best will win.  I think it will be economics.  And so economists will continue to broaden the substantive areas we study.  Since Gary Becker, we have been comfortable looking beyond the purely pecuniary domain, and I expect this trend towards cross-disciplinary work to continue.

I think it’s broadly true that economics will become more empirical, and that this is a good thing, but I’m not convinced economics will “win” the race. This tracks somewhat with the thoughts from Marc Bellemare that I’ve linked to before: his post on “Methodological convergence in the social sciences” is about the rise of mathematical formalism in social sciences other than economics. This complements the rise of empirical methods, in the sense that while they are different developments, both are only possible because of the increasing mathematical, statistical, and coding competency of researchers in many fields. And I think the language of convergence is more likely to represent what will happen (and what is already happening), rather than the language of a “race.”

We’ve already seen an increase in RCTs (developed in medicine and epidemiology) in economics and political science. The decades ahead will (hopefully) see more routine, serious analysis of observational data in epidemiology and other fields, in the sense that the analysis will be more careful about causal inference. And advanced statistical techniques and machine learning methods will become commonplace across all fields as researchers deal with massive, complex longitudinal datasets gleaned not just from surveys but increasingly from everyday data collection.

Economists have a head start in that their starting pool of talent is generally more mathematically competent than other social sciences’ incoming PhD classes. But, switching back to the “race” terminology, economics will only “win” if — as Wolfers speculates — it can leverage theory as a tool for structuring empirical investigation. My rough impression is that economic theory does play this role sometimes, but it has also held empirical investigation in economics back at times: perhaps through publication bias against empirical results that don’t fit the theory (the minimum wage literature is one example), and possibly more broadly by closing off routes of investigation that would not occur to someone already trained in economic theory.

Regardless, I get the impression that if you want to be a cutting-edge researcher in any social science you should be beefing up not only your mathematical and statistical training, but also your coding practice.

Update: Stevenson and Wolfers expand their thoughts in this excellent Bloomberg piece. And more at Freakonomics here.

August 8, 2012

Stats lingo in econometrics and epidemiology

Last week I came across an article I wish I’d found a year or two ago: “Glossary for econometrics and epidemiology” (PDF from JSTOR, ungated version here) by Gunasekara, Carter, and Blakely.

Statistics is to some extent a common language for the social sciences, but there are also big variations in that language that can cause problems when students and scholars try to read literature from outside their fields. I first learned epidemiology and biostatistics at a school of public health, and this year I’m taking econometrics from an economist, as well as other classes that draw heavily on the economics literature.

Friends in my economics-centered program have asked me “what’s biostatistics?” Likewise, public health friends have asked “what’s econometrics?” (or just commented that it’s a silly name). In reality both fields use many of the same techniques with different language and emphases. The Gunasekara, Carter, and Blakely glossary linked above covers the following terms, amongst others:

  • confounding
  • endogeneity and endogenous variables
  • exogenous variables
  • simultaneity, social drift, social selection, and reverse causality
  • instrumental variables
  • intermediate or mediating variables
  • multicollinearity
  • omitted variable bias
  • unobserved heterogeneity

If you’ve only studied econometrics or biostatistics, chances are at least some of these terms will be new to you, even though most have roughly equivalent forms in the other field.

Outside of differing language, another difference is in how frequently various techniques are used. For instance, instrumental variables seem (to me) to be under-used in public health / epidemiology applications. I took four terms of biostatistics at Johns Hopkins and don’t recall instrumental variables being mentioned even once! On the other hand, economists only recently discovered randomized trials. (Now they’re more widely used.)

But even within a given statistical technique there are important differences. You might think that all social scientists doing, say, multiple linear regression on observational data, or critiquing the results of randomized controlled trials, would use the same language. In my experience they not only use different vocabulary for the same things, they also emphasize different things. A third to half of my epidemiology coursework involved establishing causal models (often with directed acyclic graphs) in order to understand which confounding variables to control for in a regression, whereas in econometrics we (very!) briefly discussed how to decide which covariates might cause omitted variable bias. These discussions were basically about the same thing, but they differed in both language and emphasis.
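To make the overlap concrete: what an epidemiologist calls confounding shows up in a regression as what an econometrician calls omitted variable bias, and an instrumental variable is one standard fix that both glossary entries point to. A small simulation (with arbitrary made-up coefficients, purely for illustration) shows both at once:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

u = rng.normal(size=n)                       # unobserved confounder / omitted variable
z = rng.normal(size=n)                       # instrument: shifts x, no direct effect on y
x = 0.7 * z + 0.8 * u + rng.normal(size=n)
y = 2.0 * x + 1.5 * u + rng.normal(size=n)   # true causal effect of x on y is 2.0

# Naive OLS slope of y on x is biased upward, because u is omitted (it confounds x and y)
b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)

# The instrumental-variables (Wald) estimator recovers roughly the true effect
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
```

An epidemiology course would describe `u` as a confounder to be drawn into a DAG and adjusted for; an econometrics course would describe the same `b_ols` problem as omitted variable bias and reach for the instrument `z`. Same arithmetic, different vocabulary.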

I think an understanding of how and why researchers from different fields talk about things differently helps you understand the sociology and motivations of each field. This is all related to what Marc Bellemare calls the ongoing “methodological convergence in the social sciences.” As research becomes more interdisciplinary — and as any application of research is much more likely to require interdisciplinary knowledge — understanding how researchers trained in different academic schools think and talk will become increasingly important.

May 3, 2012

Princeton epidemiology: norovirus edition

Princeton is in the midst of an outbreak of norovirus! What’s norovirus, you ask? Well, it looks like this:

[image: electron micrograph of norovirus]

Not helpful? Here’s the CDC fact sheet:

Noroviruses (genus Norovirus, family Caliciviridae) are a group of related, single-stranded RNA, non-enveloped viruses that cause acute gastroenteritis in humans. The most common symptoms of acute gastroenteritis are diarrhea, vomiting, and stomach pain. Norovirus is the official genus name for the group of viruses previously described as “Norwalk-like viruses” (NLV).

Noroviruses spread from person to person, through contaminated food or water, and by touching contaminated surfaces. Norovirus is recognized as the leading cause of foodborne-disease outbreaks in the United States. Outbreaks can happen to people of all ages and in a variety of settings. Read more about it using the following links.

My shorter translation: “Got an epidemic of nasty stomach problems in an institutional setting (like a nursing home or university)? It’s probably norovirus. Wash your hands a lot.”

The all-campus email I received earlier today is included below. Think of this as a real-time, less-sexy version of the CDC’s MMWR. Emphasis added:

To: Princeton University community

Date: Feb. 6, 2012

From: University Health Services and Environmental Health and Safety

Re: Update: Campus Hygiene Advisory

In light of continuing cases of gastroenteritis on campus, University Health Services and the Office of Environmental Health and Safety want to remind faculty, staff and students about increased attentiveness to personal hygienic practices.

A few of the recent cases have tested positive for norovirus, which is a common virus that causes gastroenteritis.  While it is usually not serious and most people recover in a few days, gastroenteritis can cause periods of severe sickness and can be highly contagious. You can prevent the spread of illness by practicing good hygiene, such as frequent hand washing, and limiting contact with others if sick.

Gastroenteritis includes symptoms of diarrhea, vomiting and abdominal cramps. Please take the following steps if you are experiencing symptoms:

–Ill students should refrain from close contact with others and contact University Health Services at 609-258-3129 or visit McCosh Health Center on Washington Road. Ill employees are encouraged to stay home and contact their personal physicians for medical assistance.

–Wash your hands frequently and carefully with soap and warm water, and always after using the bathroom.

–Refrain from close contact with others until symptoms have subsided, or as advised by medical staff.

–Do not handle or prepare food for others while experiencing symptoms and for two-to-three days after symptoms subside.

–Increase your intake of fluids, such as tea, water, sports drinks and soup broth, to prevent dehydration.

–Avoid sharing towels, beverage bottles, food, and eating utensils and containers.

–Clean and disinfect soiled surfaces with bleach-based cleaning products. Students and others on campus who need assistance with cleaning and disinfecting soiled surfaces may call Building Services at 609-258-8000. Building Services also will be increasing disinfection of frequent touch points, such as doorknobs and restroom fixtures.

–Clean all soiled clothes and linen. Soiled linen should be washed and dried in the hottest temperature recommended by the linen manufacturer.

In the past week, University Health Services has seen more than the usual number of students experiencing symptoms of acute gastroenteritis. The New Jersey Department of Health and Senior Services tested samples from a few of the cases, which were later found positive for norovirus. Because norovirus has been identified as the chief cause of gastroenteritis currently on campus, further testing is not planned at this time, but the University is urging community members to take steps to prevent the further spread of illness.

Noroviruses are the most common causes of gastroenteritis in the United States, according to the Center for Disease Control and Prevention. Anyone can become infected with gastroenteritis and presence of the illness may sometimes increase during winter months. While most people get better in a few days, gastroenteritis can be serious in young children, the elderly and people with other health conditions. Frequent hand washing with soap and warm water is your best defense against most communicable disease.

I bolded a few passages because I think the very last sentence (wash your hands) is actually the most important single part of the message and is much clearer than encouraging someone to increase “attentiveness to personal hygienic practices.” But still a good message overall. At least one friend has come down with this and it sounds unpleasant…

February 6, 2012