Archive for May, 2011

Tornado epidemiology

The news out of Joplin, Missouri is heartbreaking, and it comes so quickly on the heels of the tornadoes that hit Tuscaloosa, Alabama. Central Arkansas, where I grew up, gets hit by tornadoes every spring, so I have plenty of memories of taking shelter in response to warnings. College nights with social plans ruined when we had to hunker down in an interior hallway. Dark, roiling clouds circling and the spooky calm when the rain and hail stop but the winds stay strong. Racing home from work to get to my house and its basement — a rarity in the South — before a particularly ominous storm hit. Neighboring communities were sometimes hit more directly by storms, and Harding students often participated in clean-up and recovery efforts, but my town was spared direct hits by the heaviest tornadoes.

So what does epidemiology have to say about tornadoes? Their paths aren’t exactly random, in the sense that some areas are more prone to storms that produce tornadoes. Growing up I knew where to take shelter: interior hallways away from windows if your house didn’t have a basement or a dedicated storm shelter. I also knew that mobile homes were a particularly bad place to be, and that the carnage was always worst when a tornado happened to hit a mobile home lot.

But there is some interesting research out there that tells us more than you might think. Obviously and thankfully you can’t do a randomized trial assigning some communities to get storms and others not, so the evidence of how to prevent tornado-related injury and death is mostly observational. What do we know? I’m not an expert on this but I did a quick, non-systematic scan and here’s what I found:

First, the annual tornado mortality rate has actually gone down quite a lot over the last few decades. That says nothing about the frequency and intensity of tornadoes themselves, which is a matter for meteorologists to research. The actual number of deaths resulting from tornadoes would probably be a function of the number of people in the US, where they live and whether those areas are prone to tornadoes, the frequency and intensity of the tornadoes, and risk factors for people in the affected area once the tornado hits.

This NOAA site has the following graph of tornado mortality where the vertical axis is tornado deaths per million people in the US (on a log scale) and the horizontal axis covers 1875 – 2008.
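
For the curious, here's a minimal sketch of how a figure like that could be drawn with matplotlib. The numbers below are placeholders I made up to show the shape of a declining series on a log axis; they are not NOAA's actual data.

    # Sketch only: illustrative, made-up values, not NOAA's series.
    import matplotlib.pyplot as plt

    years = [1880, 1900, 1920, 1940, 1960, 1980, 2000]
    deaths_per_million = [4.0, 3.5, 3.0, 1.5, 0.7, 0.4, 0.2]  # placeholders

    plt.plot(years, deaths_per_million, "o-")
    plt.yscale("log")  # mortality spans orders of magnitude, hence the log axis
    plt.xlabel("Year")
    plt.ylabel("Tornado deaths per million US residents (log scale)")
    plt.show()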

Second, many of the risk factors for tornado injury and death are intuitive and suggest possible interventions to minimize risk in tornado-prone areas. Following tornadoes in North and South Carolina in 1984, Eidson et al. surveyed people hospitalized and family members of people who were killed, along with uninjured persons who were present when the surveyed individuals were hurt. The main types of injury were deep cuts, concussions, unconsciousness and broken bones. Risk factors included living in mobile homes, “advanced age (60+ years), no physical protection (not having been covered with a blanket or other object), having been struck by broken window glass or other falling objects, home lifted off its foundation, collapsed ceiling or floor, or walls blown away.” Some of those patterns point to potential education interventions — better shelters for mobile home residents, targeting alerts to older residents, covering with a blanket, and staying in interior hallways, to say nothing of building codes to make more survivable structures.

Third, some things are less clear, like whether it’s safe to be in a car during a tornado. Daley et al. did a case-control study of tornado injuries and deaths in the aftermath of tornadoes in Oklahoma in 1999. They found higher risk of tornado death for those in mobile homes (odds ratio of 35.3, 95% CI 7.8 – 175.6) or outdoors (odds ratio of 141.2, 95% CI 15.9 – a whopping 6,379.8) compared to those in houses. They found no difference in risk of death, severe injury, or minor injury among people in cars vs. those in houses. And they found that risk of death, severe injury, or minor injury was actually lower among those “fleeing their homes in motor vehicles than among those remaining.” That’s surprising to me, and contrary to most of the tornado-related safety advice I heard from meteorologists and family growing up. I wonder whether this particular study goes against the majority of findings, or whether there is any consensus grounded in data at all.
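
If you haven’t worked with odds ratios, here’s a quick sketch of the arithmetic behind numbers like those, computed from a 2x2 table of exposure by outcome. The counts are invented (I picked them so the point estimate comes out to 35.3, like the mobile home result, though the interval won’t match the study’s); they are not Daley et al.’s data.

    import math

    # Hypothetical 2x2 table: rows are exposure (mobile home vs. house),
    # columns are outcome (died vs. survived). Counts are made up.
    a, b = 12, 30    # mobile home: deaths, survivors
    c, d = 4, 353    # house: deaths, survivors

    odds_ratio = (a * d) / (b * c)                  # cross-product ratio: 35.3
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)    # standard error of log(OR)
    lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
    hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
    print(f"OR = {odds_ratio:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")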

Fourth, our knowledge of tornadoes can be messy. One demographic approach to tornado risk factors (Donner 2007) is to look for correlations of tornado fatalities and injuries with rural population, population density, household size, racial minorities, deprivation/poverty, tornado watches and warnings, and mobile homes. Donner noted that “Findings suggest a strong relationship between the size of a tornado path and both fatalities and injuries, whereas other measures related to technology, population, and organization produce significant yet mixed results.”

That’s just a sampling of the literature on tornado epidemiology. The studies are interesting but relatively scarce, at least from my initial perusal. That’s probably because tornado deaths and injuries are relatively rare in the US. Still, the storms themselves are terrifying, and because they often wreak havoc on a single community they generate more sympathy and news coverage than a more frequent — and thus less extraordinary — problem like car crashes.

Update: NYT has an interesting article about tornado preparedness, including some speculation on why the Joplin tornado was so bad.

May 28, 2011

This is all very meta

One of the best things about XKCD is that the mouse-over text (simply rendered using the title attribute in the <img> HTML tag) will almost always give you a second laugh or an interesting thought. This comic isn’t his best, but it has this great mouse-over text:

Wikipedia trivia: if you take any article, click on the first link in the article text not in parentheses or italics, and then repeat, you will eventually end up at “Philosophy.”

Naturally, I went to Wikipedia and clicked Random Article, which gave me the Bradford-Union Street Historic District in Plymouth, Massachusetts. The links followed in order were:

  1. Bradford-Union Street Historic District
  2. Plymouth, Massachusetts
  3. Plymouth County, Massachusetts
  4. County (United States)
  5. U.S. state
  6. Federated state
  7. Constitution
  8. State (polity)
  9. Institution
  10. Social structure
  11. Social sciences
  12. List of academic disciplines
  13. Academia
  14. Organism
  15. Community
  16. Interaction
  17. Causality
  18. Event
  19. Observable
  20. Physics
  21. Natural science
  22. Science
  23. Knowledge
  24. Fact
  25. Information
  26. Sequence
  27. Mathematics
  28. Quantity
  29. Property (philosophy)
  30. Modern philosophy
  31. Philosophy

According to the Wikipedia page on the phenomenon (of course there’s one, and of course it already referenced the XKCD mention), the longest known link chain is just 35 links back to philosophy, so my random find is way up there.

It’s interesting to observe how the selections work back from entries on specific, literal things to broader categories. The selections go from a place to categories of place to knowledge to a meta-description of what knowledge is. I imagine that if you grouped Wikipedia entries by category you’d see similar chains leading back to philosophy. For instance, I think all place names should follow a trajectory similar to my example.

I also wonder what the distribution would look like if you took a list of all entries on Wikipedia and graphed them by this philosophy index number. I think all articles listed together would be messy, but a list of articles weighted by web traffic would yield a logarithmic distribution, with the bulk of the entries being people, places, or things that are far from philosophy but eventually link there. Also, the distribution for any single category (places in the United States, for example) should look more like a normal distribution, and the narrower the category, the more true that would be. Now if someone will just build a computer program to test my hypothesis. (A rough sketch of one approach follows below.)
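
Here is a minimal, untested sketch in Python of how such a program might start, assuming the requests and BeautifulSoup libraries and Wikipedia’s current HTML layout. The “first link not in parentheses or italics” rule is approximated by dropping italicized elements and tracking parenthesis depth in the visible text, so treat this as an illustration rather than a finished crawler.

    import requests
    from bs4 import BeautifulSoup, NavigableString

    BASE = "https://en.wikipedia.org"

    def first_link(title):
        """Return the first body link that isn't italicized or parenthesized."""
        html = requests.get(f"{BASE}/wiki/{title}").text
        soup = BeautifulSoup(html, "html.parser")
        for p in soup.select("#mw-content-text p"):
            for italic in p.find_all(["i", "em"]):
                italic.decompose()  # ignore italicized links
            depth = 0  # parenthesis nesting depth in the visible text so far
            for node in p.descendants:
                if isinstance(node, NavigableString):
                    depth += node.count("(") - node.count(")")
                elif getattr(node, "name", None) == "a" and depth == 0:
                    href = node.get("href", "")
                    # keep article links only, skip File:/Help:/etc. namespaces
                    if href.startswith("/wiki/") and ":" not in href:
                        return href[len("/wiki/"):]
        return None

    def philosophy_index(title, limit=100):
        """Count first-link hops from `title` to Philosophy (None on a loop)."""
        visited = []
        while title is not None and title != "Philosophy" and len(visited) < limit:
            if title in visited:  # loop detected; chain never reaches Philosophy
                return None
            visited.append(title)
            title = first_link(title)
        return len(visited) if title == "Philosophy" else None

    # 30 clicks for the chain above, if the parsing matches my manual walk
    print(philosophy_index("Bradford-Union_Street_Historic_District"))

Aggregating philosophy_index over a sample of articles from Special:Random would be enough to eyeball the distributions I’m speculating about.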

May 27, 2011

Best practices of ranking aid best practices

Aid Watch has a post up by Claudia Williamson (a post-doc at DRI) about the “Best and Worst of Official Aid 2011”. As Claudia summarizes, their paper looks at “five dimensions of agency ‘best practices’: aid transparency, minimal overhead costs, aid specialization, delivery to more effective channels, and selectivity of recipient countries based on poverty and good government” and calculates an overall agency score.

Williamson notes that the “scores only reflect the above practices; they are NOT a measure of whether the agency’s aid is effective at achieving good results.” Very true — but I think this can be easily overlooked. In their paper Easterly and Williamson say:

We acknowledge that there is no direct evidence that our indirect measures necessarily map into improved impact of aid on the intended beneficiaries. We will also point out specific occasions where the relationship between our measures and desirable outcomes could be non-monotonic or ambiguous.

But still, grouping these things together into a single index may obscure more than it enlightens. Transparency seems like an unambiguous good, whereas low overhead percentages are less clearly so. Some other criticisms from the comments section that I’d like to highlight include one from someone named Bula:

DfID scores high and USAID scores low because they have fundamentally different missions. I doubt anyone at USAID or State would attempt to say with a straight face that AID is anything other than a public diplomacy tool. DfID as a stand alone ministry has made a serious effort in all of the areas you’ve measured because it’s mission aligns more closely to ‘doing development’ and less with ‘public diplomacy’. Seems to be common sense.

And a comment from Tom that starts with a quote from the Aid Watch post:

“These scores only reflect the above practices; they are NOT a measure of whether the agency’s aid is effective at achieving good results.”

Seriously? How can you possibly give an aid agency a grade based solely on criteria that have no necessary relationship with aid effectiveness? It is your HYPOTHESIS that transparency, overhead, etc, significantly affect the quality of aid, but without looking at actual effeciveness that hypothesis is completely unproven. An A or an F means absolutely nothing in this context. Without looking at what the agency does with the aid (i.e. is it effective), why should we care whether an aid agency has low or high overhead? To take another example, an aid agency could be the least transparent but achieve the best results; which matters more, your ideological view of how an agency “should” function, or that they achieve results? In my mind it’s the ends that matter, and we should then determine what the best means are to achieve that result. You approach it with an a priori belief that those factors are the most important, and therefore risk having ideology overrule effectiveness. Isn’t that criticism the foundation of this blog and Dr. Easterly’s work more generally?

Terence at Waylaid Dialectic has three specific criticisms worth reading and then ends with this:

I can see the appeal, and utility of such indices, and the longitudinal data in this one are interesting, but still think the limitations outweigh the merits, at least in the way they’re used here. It’s an interesting paper but ultimately more about heat than light.

I’m not convinced the limitations outweigh the merits, but there are certainly problems. One is that the results quickly get condensed to “Britain, Japan and Germany do pretty well and the U.S. doesn’t.”

Another problem is that without having some measure of aid effectiveness, it seems that this combined metric may be misleading — analogous to a process indicator in a program evaluation. In that analogy, Program A might procure twice as many bednets as Program B, but that doesn’t mean it’s necessarily better, and for that you’d need to look at the impact on health outcomes. Maybe more nets is better. Or maybe the program that procures fewer bednets distributes them more intelligently and has a stronger impact. In the absence of data on health outcomes, is the process indicator useful or misleading? Well, it depends. If there’s a strong correlation (or even a good reason to believe) that the process and impact indicators go together, then it’s probably better than nothing. But if some of the aid best practices lead to better aid effectiveness, and some don’t, then it’s at best not very useful, and at worst will prompt agencies to move in the wrong direction.
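
To make that worry concrete, here’s a toy simulation (all relationships invented, nothing estimated from real aid data) in which one component of a composite score tracks true effectiveness, one is pure noise, and one points the wrong way. The composite then ranks agencies worse than the informative component alone would.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200  # hypothetical agencies

    effectiveness = rng.normal(size=n)  # unobserved "true" impact
    transparency = effectiveness + rng.normal(scale=0.5, size=n)  # informative
    overhead = rng.normal(size=n)                                 # pure noise
    specialization = -0.3 * effectiveness + rng.normal(size=n)    # misleading

    composite = (transparency + overhead + specialization) / 3

    def rank_corr(a, b):
        """Spearman-style rank correlation."""
        ranks = lambda x: np.argsort(np.argsort(x))
        return np.corrcoef(ranks(a), ranks(b))[0, 1]

    print("transparency alone:", rank_corr(transparency, effectiveness))  # ~0.87
    print("composite score:  ", rank_corr(composite, effectiveness))      # ~0.4

Whether the real best-practices index behaves this way is exactly the empirical question the critics are raising; the point is only that averaging in uninformative components can make an index worse at tracking the thing you care about.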

As Easterly and Williamson note in their paper, they’re merely looking at whether aid agencies do what aid agencies say should be their best practices. However, without a better idea of the correlation between those aid practices and outcomes for the people who are supposed to benefit from the programs, it’s really hard to say whether this metric is (using Terence’s words) “more heat than light.”

It’s a Catch-22: without information on the correlation between best aid practices and real aid effectiveness it’s hard to say whether the best aid practices “process indicator” is enlightening or obfuscating, but if we had that data on actual aid effectiveness we would be looking at that rather than best practices in the first place.

May 12, 2011

Miscellany: Epidemic City and life expectancy

In 8 days I’ll be done with my first year of graduate studies and will have a chance to write a bit more. I’ve been keeping notes all year on things to write about when I have more time, so I should have no shortage of material! In the meantime, two links to share:

1) Just in time for my summer working with the New York City Department of Health comes Epidemic City: The Politics of Public Health in New York. The Amazon / publisher’s blurb:

The first permanent Board of Health in the United States was created in response to a cholera outbreak in New York City in 1866. By the mid-twentieth century, thanks to landmark achievements in vaccinations, medical data collection, and community health, the NYC Department of Health had become the nation’s gold standard for public health. However, as the city’s population grew in number and diversity, new epidemics emerged, and the department struggled to balance its efforts between the treatment of diseases such as AIDS, multi-drug resistant tuberculosis, and West Nile Virus and the prevention of illness-causing factors like lead paint, heroin addiction, homelessness, smoking, and unhealthy foods. In Epidemic City, historian of public health James Colgrove chronicles the challenges faced by the health department in the four decades following New York City’s mid-twentieth-century peak in public health provision.

This insightful volume draws on archival research and oral histories to examine how the provision of public health has adapted to the competing demands of diverse public needs, public perceptions, and political pressure.

Epidemic City delves beyond a simple narrative of the NYC Department of Health’s decline and rebirth to analyze the perspectives and efforts of the people responsible for the city’s public health from the 1960s to the present. The second half of the twentieth century brought new challenges, such as budget and staffing shortages, and new threats like bioterrorism. Faced with controversies such as needle exchange programs and AIDS reporting, the health department struggled to maintain a delicate balance between its primary focus on illness prevention and the need to ensure public and political support for its activities.

In the past decade, after the 9/11 attacks and bioterrorism scares partially diverted public health efforts from illness prevention to threat response, Mayor Michael Bloomberg and Department of Health Commissioner Thomas Frieden were still able to work together to pass New York’s Clean Indoor Air Act restricting smoking and significant regulations on trans-fats used by restaurants. Because of Bloomberg’s willingness to exert his political clout, both laws passed despite opposition from business owners fearing reduced revenues and activist groups who decried the laws’ infringement upon personal freedoms. This legislation, preventative in nature much like the 1960s lead paint laws and the department’s original sanitary code, reflects a return to the 19th century roots of public health, when public health measures were often overtly paternalistic. The assertive laws conceived by Frieden and executed by Bloomberg demonstrate how far the mandate of public health can extend when backed by committed government officials.

Epidemic City provides a compelling historical analysis of the individuals and groups tasked with negotiating the fine line between public health and political considerations during the latter half of the twentieth century. By examining the department’s successes and failures during the ambitious social programs of the 1960s, the fiscal crisis of the 1970s, the struggles with poverty and homelessness in the 1980s and 1990s, and in the post-9/11 era, Epidemic City shows how the NYC Department of Health has defined the role and scope of public health services, not only in New York, but for the entire nation.

2) Aaron Carroll at the Incidental Economist writes about the subtleties of life expectancy. His main point is that infant mortality skews life expectancy figures so much that if you’re talking about end-of-life expectations for adults who have already survived the (historically) most perilous years of childhood, you really need to look at different data altogether.

The blue points on the graph below show life expectancy at birth for all races in the US, while the red line shows life expectancy among those who have reached the age of 65. That is, if you’re a 65-year-old who wants to know your chances of dying (on average!) in a certain period of time, it’s best to consult a more complete life table rather than life expectancy at birth, because you’ve already dodged the bullet for 65 years.

(from the Incidental Economist)
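
To see why conditioning matters, here’s a toy calculation with made-up numbers (nothing like real US life tables): suppose 10% of a birth cohort dies in infancy and everyone else lives to exactly 80. Life expectancy at birth is only 72 years, yet a 65-year-old still expects 15 more.

    # Toy cohort: 10% die at age 0, the rest at age 80 (made-up numbers).
    ages_at_death = [0] * 10 + [80] * 90

    le_at_birth = sum(ages_at_death) / len(ages_at_death)

    survivors = [a for a in ages_at_death if a >= 65]
    le_at_65 = sum(a - 65 for a in survivors) / len(survivors)

    print(f"Life expectancy at birth: {le_at_birth:.0f} years")   # 72
    print(f"Years remaining at age 65: {le_at_65:.0f} years")     # 15

The infant deaths drag the at-birth average down by eight years without changing what a 65-year-old should expect at all.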