Archive for April, 2013

"When public health works, it's invisible"

Caitlin Rivers’ post on the “public health paradox: why people don’t get flu shots” hits the nail on the head:

Unfortunately, the root of this problem is deep. The problem is that when public health works, it is invisible. It’s an insidious, persistent public relations issue that plagues public health. Nobody sees when a chain of disease transmission is broken, or when contaminated food is prevented from reaching the market, or when toxic pollutants don’t enter the environment. That’s the point: the goal of public health is prevention, not reaction….

What then can be done to counteract these misperceptions? First, public health needs to be more vocal about its successes. This graphic of crude death rates for infectious diseases during the 19th century, for example, should be widely disseminated. A little self-promotion could go a long ways.

That’s one reason I like Millions Saved, from the Center for Global Development — it highlights “proven success in global health.” One of the things that struck me when reading it was that most of the people who benefited from these interventions and programs would have no way of knowing that they benefited.

For another positive take, check out Charles Kenny’s book Getting Better.


22 04 2013

ICD-9 E845

Today in class I learned something that improbably brings together my interests in public health and rockets: there’s an ICD (International Classification of Disease) code, E845, for “accident involving spacecraft.” Seriously — a WSJ blog post from 2008 confirms it.


09 04 2013

Monday miscellany

08 04 2013

(Not) knowing it all along

David McKenzie is one of the guys behind the World Bank’s excellent and incredibly wonky Development Impact blog. He came to Princeton to present a new paper with Gustavo Henrique de Andrade and Miriam Bruhn, “A Helping Hand or the Long Arm of the Law? Experimental evidence on what governments can do to formalize firms” (PDF). The subject matter — trying to get small, informal companies to register with the government — is outside my area of expertise, but I thought there were a couple of methodologically interesting bits:

First, there’s an interesting ethical dimension: one of the interventions they tested was increasing the likelihood that a firm would be visited by a government inspector (i.e., that the law would be enforced). From page 10:

In particular, if a firm owner were interviewed about their formality status, it may not be considered ethical to then use this information to potentially assign an inspector to visit them. Even if it were considered ethical (since the government has a right to ask firm owners about their formality status, and also a right to conduct inspections), we were still concerned that individuals who were interviewed in a baseline survey and then received an inspection may be unwilling to respond to a follow-up. Therefore a listing stage was done which did not involve talking to the firm owner.

In other words, all their baseline data was collected without actually talking to the firms they were studying — check out the paper for more on how they did that.

Second, they did something that could (and maybe should) be incorporated into many evaluations with relative ease. Because findings often seem obvious after we hear them, McKenzie et al. asked the government staff whose program they were evaluating to estimate what the impact would be before the results were in. Here’s that section (emphasis added):

A standard question with impact evaluations is whether they deliver new knowledge or merely formally confirm the beliefs that policymakers already have (Groh et al, 2012). In order to measure whether the results differ from what was anticipated, in January 2012 (before any results were known) we elicited the expectations of the Descomplicar [government policy] team as to what they thought the impacts of the different treatments would be. Their team expected that 4 percent of the control group would register for SIMPLES [the formalization program] between the baseline and follow-up surveys. We see from Table 7 that this is an overestimate…

They then expected the communication only group to double this rate, so that 8 percent would register, that the free cost treatment would lead to 15 percent registering, and that the inspector treatment would lead to 25 percent registering…. The zero or negative impacts of the communication and free cost treatments therefore are a surprise. The overall impact of the inspector treatment is much lower than expected, but is in line with the IV estimates, suggesting the Descomplicar team have a reasonable sense of what to expect when an inspection actually occurs, but may have overestimated the amount of new inspections that would take place. Their expectation of a lack of impact for the indirect inspector treatment was also accurate.

This establishes exactly which of the results were a surprise and which weren’t. It might also make sense for researchers to ask both the policymakers they’re working with and a group of researchers who study the same subject to make such predictions; it would certainly help make the case for the value of (some) studies.
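The elicit-then-compare step is simple enough to script. Here’s a minimal sketch: the expected registration rates are the Descomplicar team’s figures quoted above, but the “estimated” values are placeholders I invented for illustration — they are not the paper’s actual results.

```python
# Compare pre-registered expectations against estimated impacts, in the
# spirit of McKenzie et al.'s elicitation exercise. Expected values come
# from the quoted passage; estimated values are HYPOTHETICAL placeholders.

expected = {            # elicited before results were known (% registering)
    "control": 4,
    "communication only": 8,
    "free cost": 15,
    "inspector": 25,
}
estimated = {           # hypothetical illustration values, not real results
    "control": 2,
    "communication only": 2,
    "free cost": 2,
    "inspector": 10,
}

for arm in expected:
    gap = estimated[arm] - expected[arm]
    flag = "surprise" if abs(gap) >= 5 else "as expected"
    print(f"{arm:20s} expected {expected[arm]:2d}%  estimated {estimated[arm]:2d}%  ({flag})")
```

Writing the expectations down in machine-readable form before unblinding is the whole trick: the comparison itself is trivial once the priors are on record.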

An uphill battle

I took this photo in the NYC subway a few days ago. My apologies for the quality, but I thought it was a great juxtaposition:

At the top of the photo is an ad from the NYC Department of Health, advising you to choose food with less sodium. (Here’s an AP story about the ads.) But at the bottom right is an ad for McDonald’s dollar menu, and those are everywhere. That doesn’t mean we shouldn’t run the health ads, but it’s worth remembering that the sheer volume of food advertising will always dwarf opposing health messages.

03 04 2013

Fun projects are fun

Jay Ulfelder, of the blog Dart-Throwing Chimp, recently wrote a short piece in praise of fun projects. He links to my Hunger Games survival analysis, and Alex Hanna’s recent application of survival analysis to a reality TV show, RuPaul’s Drag Race. (That single Hunger Games post has accounted for about one-third of the ~100k page views this blog got in the last year!) Jay’s post reminded me that I never shared links to Alex’s survival analysis, which is a shame, so here goes:

First, there’s “Lipsyncing for your life: a survival analysis of RuPaul’s Drag Race”:

I don’t know if this occurs with other reality shows (this is the first I’ve been taken with), but there is some element of prediction involved in knowing who will come out as the winner. A drag queen we spoke with at Plan B suggested that the length of time each queen appears in the season preview is an indicator, while Homoviper’s “index” is largely based on a more qualitative, hermeneutic analysis. I figured, hey, we could probably build a statistical model to know which factors are the most determinative in winning the competition.

And then come two follow-ups, where Alex digs into predictions for the next episode of the current season, and again for the one after that. That last post is a great little lesson on the importance of the proportional hazards assumption.
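The basic object in this kind of analysis is “time until elimination,” with contestants who are still in the running treated as censored. As a toy illustration — not Alex’s model or data, and her posts use a proportional hazards regression rather than this — here’s a from-scratch Kaplan–Meier estimator on invented episode-of-elimination numbers:

```python
# Toy Kaplan-Meier survival curve for "episodes survived" data.
# The contestant data below is invented for illustration; it is NOT
# from Alex Hanna's RuPaul's Drag Race analysis.

def kaplan_meier(durations, events):
    """Return [(time, survival probability)] for right-censored data.

    durations: episode at which each contestant left (or was last seen)
    events:    1 if eliminated at that episode, 0 if censored
               (e.g. still competing when the data were collected)
    """
    times = sorted({t for t, e in zip(durations, events) if e == 1})
    surv = 1.0
    curve = []
    for t in times:
        at_risk = sum(1 for d in durations if d >= t)
        deaths = sum(1 for d, e in zip(durations, events) if d == t and e == 1)
        surv *= 1 - deaths / at_risk     # product-limit update
        curve.append((t, surv))
    return curve

# Invented example: 8 contestants, one eliminated per episode,
# with the final three censored at episode 6.
durations = [1, 2, 3, 4, 5, 6, 6, 6]
events    = [1, 1, 1, 1, 1, 0, 0, 0]

for t, s in kaplan_meier(durations, events):
    print(f"episode {t}: S(t) = {s:.3f}")
```

The proportional hazards assumption Alex’s last post discusses is a further step: a Cox-style model layers covariates (age, screen time in the season preview, etc.) on top of a baseline curve like this one, and assumes each covariate scales the hazard by a constant factor across all episodes.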

I strongly agree with this bit from Jay’s post about the value of these projects:

Based on personal experience, I’m a big believer in learning by doing. Concepts don’t stick in my brain when I only read about them; I’ve got to see the concepts in action and attach them to familiar contexts and examples to really see what’s going on.

Right on. And in addition to being useful, these projects are, well, fun!

02 04 2013