Life expectancy: what really mattered

National Geographic has a great series up on global population growth. We'll hit 7 billion people in 2011 - quite a milestone. One thing to quibble about from the article:

Moreover in 1798, the same year that Malthus published his dyspeptic tract, his compatriot Edward Jenner described a vaccine for smallpox—the first and most important in a series of vaccines and antibiotics that, along with better nutrition and sanitation, would double life expectancy in the industrializing countries, from 35 years to 77 today. It would take a cranky person to see that trend as gloomy: “The development of medical science was the straw that broke the camel’s back,” wrote Stanford population biologist Paul Ehrlich in 1968.

I've read statements like this - about increasing life expectancy and its reasons - many times, and it's almost always done in a certain order. Here, life expectancy increases result from "a series of vaccines and antibiotics that, along with better nutrition and sanitation..." It's hardly the most egregious wording I've seen. Often I'll read that "modern medicine" led to increases in life expectancy, so it's nice that the article specifically mentions vaccines, a preventive measure, instead of only the curative parts of modern Western medicine that we're more familiar with as adults.

But even the formulation "vaccines and antibiotics along with better nutrition and sanitation" still seems problematic. Why are nutrition and sanitation always an afterthought? I don't have a citation handy, but my impression is that the vast majority of the increase in life expectancy stemmed from advances in sanitation and nutrition, while curative medicine (including antibiotics) played a much more minor role. (I would love to read a good paper outlining the relative contributions of changes in nutrition, sanitation, vaccination, and antibiotics to life expectancy improvements - if you know of one, please post it in the comments.)

I think this bias stems in part from a larger bias toward seeing advances in public health as medical advances, rather than societal, economic, or political ones. Many (too many?) people working in the public health field have medical backgrounds. Modern medicine is shiny and fancy and dramatic, and (credit where it's due) has made some incredible advances.

Imagine you're given a Rawlsian choice between being born into:

  • World A, with the nutrition, sanitation and vaccination of modern Europe but with all of the doctors and drugs mysteriously raptured in a giant Hippocratic tribulation, or
  • World B, with the nutrition, sanitation and vaccination of ancient Rome but with Atul Gawande on standby with the latest treatments once you get sick.

I'd definitely choose World A. (Though I'm glad we don't have to choose!)

To correct this bias, I think it would be helpful if, every time we mentioned the dramatic shift in life expectancy over the past few centuries, we emphasized that most of those gains came from reductions in child mortality, and that nutrition and sanitation deserve the lion's share of credit for those improvements. At the least, let's mention them first and then say "along with later, less important developments such as antibiotics."

Randomizing in the USA, ctd

[Update: There's quite a bit of new material on this controversy if you're interested. Here's a PDF of Seth Diamond's testimony in support of (and extensive description of) the evaluation at a recent hearing, along with letters of support from a number of social scientists and public health researchers. Also, here's a separate article on the City Council hearing at which Diamond testified, and an NPR story that basically rehashes the Times one. Michael Gechter argues that the testing is wrong because there isn't doubt about whether the program works but, as noted in the comments there, he overlooks that denial of service was already part of the program because it was underfunded.] A couple weeks ago I posted a link to this NYTimes article on a program of assistance for the homeless that's currently being evaluated by a randomized trial. The Poverty Action Lab blog had some discussion on the subject that you should check out too.

The short version is that New York City has a housing assistance program that is supposed to keep people from becoming homeless, but it was never given a truly rigorous evaluation. It would have been better to evaluate it up front (before the full program was rolled out), but they didn't do that, so they're evaluating it now. The policy isn't proven to work, and the city doesn't have the resources to give it to everyone anyway, so instead of using a waiting list (arguably a fair system) they're randomizing people into receiving the assistance or not, and then tracking whether they end up homeless. If that makes you a little uncomfortable, that's probably a good thing -- it's a sticky issue, and one that might wrongly be easier to brush aside when working in a different culture. But I think on balance it's still a good idea to evaluate programs when we don't know whether they actually do what they're supposed to do.
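For readers unfamiliar with how such a trial is set up, here's a minimal sketch in Python of random assignment into treatment and control groups. The household IDs and the fixed seed are my own illustration, not anything from the actual study protocol; the only numbers taken from the article are the 400 applicants split 200/200:

```python
import random

# Hypothetical applicant pool -- the study monitored 400 households
# that sought help between June and August.
applicants = [f"household_{i}" for i in range(400)]

random.seed(42)  # fixed seed so this illustration is reproducible
shuffled = random.sample(applicants, k=len(applicants))

treatment = set(shuffled[:200])  # offered the program's services
control = set(shuffled[200:])    # referred to other agencies instead

# Because assignment is random, the two groups should look similar on
# average (resourcefulness, income, family size, etc.), so a later
# difference in homelessness rates can be attributed to the program.
assert len(treatment) == 200 and len(control) == 200
assert treatment.isdisjoint(control)
```

The point of the shuffle is exactly what makes the design powerful and what makes people uncomfortable: who gets help is decided by chance, not by need or by queue position.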

The thing I want to highlight for now is how much the tone and presentation of an article shape your reaction to the issue being discussed. There's obviously an effect, but I thought this would be a good example because I noticed that the Times article contains both valid criticisms of the program and a good defense of why it makes sense to test it.

I reworked the article by rearranging the presentation of those sections. Mostly I just shifted paragraphs, but in a few cases I rearranged some clauses as well. I changed the headline, but otherwise I didn't change a single word, other than clarifying some names when they were introduced in a different order than in the original. And by leading with the rationale for the policy instead of with the emotional appeal against it, I think the article gives a much different impression. Let me know what you think:

City Department Innovates to Test Policy Solutions

By CARA BUCKLEY with some unauthorized edits by BRETT KELLER

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

New York City is among a number of governments, philanthropies and research groups turning to so-called randomized controlled trials to evaluate social welfare programs.

The federal Department of Housing and Urban Development recently started an 18-month study in 10 cities and counties to track up to 3,000 families who land in homeless shelters. Families will be randomly assigned to programs that put them in homes, give them housing subsidies or allow them to stay in shelters. The goal, a HUD spokesman, Brian Sullivan, said, is to find out which approach most effectively ushered people into permanent homes.

The New York study involves monitoring 400 households that sought Homebase help between June and August. Two hundred were given the program’s services, and 200 were not. Those denied help by Homebase were given the names of other agencies — among them H.R.A. Job Centers, Housing Court Answers and Eviction Intervention Services — from which they could seek assistance.

The city’s Department of Homeless Services said the study was necessary to determine whether the $23 million program, called Homebase, helped the people for whom it was intended. Homebase, begun in 2004, offers job training, counseling services and emergency money to help people stay in their homes.

The department, added commissioner Seth Diamond, had to cut $20 million from its budget in November, and federal stimulus money for Homebase will end in July 2012.

Such trials, while not new, are becoming especially popular in developing countries. In India, for example, researchers using a controlled trial found that installing cameras in classrooms reduced teacher absenteeism at rural schools. Children given deworming treatment in Kenya ended up having better attendance at school and growing taller.

“It’s a very effective way to find out what works and what doesn’t,” said Esther Duflo, an economist at the Massachusetts Institute of Technology who has advanced the testing of social programs in the third world. “Everybody, every country, has a limited budget and wants to find out what programs are effective.”

The department is paying $577,000 for the study, which is being administered by the City University of New York along with the research firm Abt Associates, based in Cambridge, Mass. The firm’s institutional review board concluded that the study was ethical for several reasons, said Mary Maguire, a spokeswoman for Abt: because it was not an entitlement, meaning it was not available to everyone; because it could not serve all of the people who applied for it; and because the control group had access to other services.

The firm also believed, she said, that such tests offered the “most compelling evidence” about how well a program worked.

Dennis P. Culhane, a professor of social welfare policy at the University of Pennsylvania, said the New York test was particularly valuable because there was widespread doubt about whether eviction-prevention programs really worked.

Professor Culhane, who is working as a consultant on both the New York and HUD studies, added that people were routinely denied Homebase help anyway, and that the study was merely reorganizing who ended up in that pool. According to the city, 5,500 households receive full Homebase help each year, and an additional 1,500 are denied case management and rental assistance because money runs out.

But some public officials and legal aid groups have denounced the study as unethical and cruel, and have called on the city to stop the study and to grant help to all the test subjects who had been denied assistance.

“They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”

But, as controversial as the experiment has become, Mr. Diamond said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.

Advocates for the homeless said they were puzzled about why the trial was necessary, since the city proclaimed the Homebase program as “highly successful” in the September 2010 Mayor’s Management Report, saying that over 90 percent of families that received help from Homebase did not end up in homeless shelters. One critic of the trial, Councilwoman Annabel Palma, is holding a General Welfare Committee hearing about the program on Thursday.

“I don’t think homeless people in our time, or in any time, should be treated like lab rats,” Ms. Palma said.

“This is about putting emotions aside,” [Mr. Diamond] said. “When you’re making decisions about millions of dollars and thousands of people’s lives, you have to do this on data, and that is what this is about.”

Still, legal aid lawyers in New York said that apart from their opposition to the study’s ethics, its timing was troubling because nowadays, there were fewer resources to go around.

Ian Davie, a lawyer with Legal Services NYC in the Bronx, said Homebase was often a family’s last resort before eviction. One of his clients, Angie Almodovar, 27, a single mother who is pregnant with her third child, ended up in the study group denied Homebase assistance. “I wanted to cry, honestly speaking,” Ms. Almodovar said. “Homebase at the time was my only hope.”

Ms. Almodovar said she was told when she sought help from Homebase that in order to apply, she had to enter a lottery that could result in her being denied assistance. She said she signed a letter indicating she understood. Five minutes after a caseworker typed her information into a computer, she learned she would not receive assistance from the program.

With Mr. Davie’s help, she cobbled together money from the Coalition for the Homeless and a public-assistance grant to stay in her apartment. But Mr. Davie wondered what would become of those less able to navigate the system. “She was the person who didn’t fall through the cracks,” Mr. Davie said of Ms. Almodovar. “It’s the people who don’t have assistance that are the ones we really worry about.”

Professor Culhane said, “There’s no doubt you can find poor people in need, but there’s no evidence that people who get this program’s help would end up homeless without it.”


As evidenced by my posting schedule, things have been busy. The quarter system is kind of fast -- since August we've already had first quarter midterms and finals and second quarter midterms and are now finishing those up and gearing up for finals. I have a lot of thoughts I want to share -- and a whole folder full of PDFs to post about -- but for now I'll just share this comic I drew in my biostats class:

Stats 101 for Policymakers

It's a problem that is easy to recognize but hard to get around: policy isn't made by public health epidemiologists or statisticians. In between the researchers (who have their own biases) and the policymakers is a whole industry of interest groups and advocates. Of course, I've been one of those advocates at times, as has pretty much everyone who has worked in public health or politics. Why am I thinking about this? I just read "Interpreting health statistics for policymaking: the story behind the headlines" in the Lancet (available for free here) by Neff Walker et al. The paper outlines this problem:

Estimates would be more credible if they come from technical groups that are independent of the organisations that implement programmes and advocate for funds.

Maternal mortality is much more inequitably distributed than neonatal mortality, but does that mean we should focus on it more? Of course, some of this comes down to fundamental philosophical differences concerning whether we should concentrate our health investments where they will make the most difference in terms of absolute numbers of lives saved, or where they will make the most difference in terms of reducing health care inequalities.

Statements like “Maternal mortality is 100-fold higher in many low-income countries than in high-income countries” sends a clear message with respect to inequities, but no information about the absolute magnitude of the problem. Statements more useful to decisionmakers are those that use a standard metric to provide sets of meaningful comparisons. For example, the ratio of inequity between low-income and high income countries for deaths from severe neonatal infections is far lower, at 11-fold. In absolute numbers, however, two to three times as many lives are lost to neonatal infections each year (1·4 million) in developing countries than to maternal mortality (500 000).
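The distinction the authors draw -- relative inequity versus absolute burden -- is easy to check with a few lines of arithmetic. The figures below are the ones quoted in the passage above; the code itself is just my illustration:

```python
# Figures quoted from Walker et al.: annual deaths in developing countries.
maternal_deaths = 500_000               # maternal mortality
neonatal_infection_deaths = 1_400_000   # severe neonatal infections

# Inequity ratios between low- and high-income countries.
maternal_inequity_ratio = 100
neonatal_inequity_ratio = 11

# Relative framing: maternal mortality looks roughly 9x more inequitable...
print(maternal_inequity_ratio / neonatal_inequity_ratio)

# ...but absolute framing: neonatal infections kill far more people.
print(neonatal_infection_deaths / maternal_deaths)  # 2.8
```

Neither framing is wrong; they just answer different questions, which is why a statement built on only one of them can mislead a policymaker.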

It's easy to criticize HIV/AIDS advocates because they're such, well, good advocates. Example 1:

For example, a common practice is to present an estimate at the global or regional level and then to elaborate on it by giving a specific and often unrepresentative example. HIV/AIDS advocates talking about the effect of AIDS on under-5 mortality often use as examples countries in southern Africa where AIDS accounts for 30–50% of deaths in under-5s. But for sub-Saharan Africa as a whole, AIDS is thought to account for less than 10% of under-5 deaths.

Example 2:

Advocates of funding for [HIV/AIDS] often quote the cumulative number of global deaths from HIV/AIDS since it was first identified. But, if historical estimates were used for other diseases, the number of HIV/AIDS deaths would be small in comparison. For example, if the same statistical procedures were applied for pneumonia as for HIV/AIDS, the cumulative deaths since 1975 would be about 60 million—almost three times the estimated cumulative deaths from AIDS in the same time period.

They end the paper with a list of recommendations for how policymakers should consider health stats coming from advocates or any other source.