Archive for December, 2010

Gates and Media Funding

You may or may not have heard of this controversy: the Gates Foundation — a huge funding source in global health — has been paying various media sources to ramp up their coverage of global health and development issues. It seems to me that various voices in global health have tended to respond to this as you might expect them to, based on their more general reactions to the Gates Foundation. If you like most of what Gates does, you probably see this as a boon, since global health and development (especially if you exclude disaster/aid stories) aren't the hottest issues in the media landscape. If you're skeptical of the typical Gates Foundation solutions (technological fixes, for example), then you might think this is more problematic.

I started off writing some lengthy thoughts on this, and realized Tom Paulson at Humanosphere has already said some of what I want to say. So I’ll quote from him a bit, and then finish with a few more of my own thoughts. First, here is an interview Paulson did with Kate James, head of communications at the Gates Foundation. An excerpt:

Q Why does the Gates Foundation fund media?

Kate James: It’s driven by our recognition of the changing media landscape. We’ve seen this big drop-off in the amount of coverage of global health and development issues. Even before that, there was a problem with a lack of quality, in-depth reporting on many of these issues so we don’t see this as being internally driven by any agenda on our part. We’re responding to a need.

Q Isn’t there a risk that by paying media to do these stories the Gates Foundation’s agenda will be favored, drowning out the dissenting voices and critics of your agenda?

KJ: When we establish these partnerships, everyone is very clear that there is total editorial independence. How these organizations choose to cover issues is completely up to them.

The most recent wave of controversy seems to stem from Gates funding going to an ABC documentary on global health that featured clips of Bill and Melinda Gates, among other things. Paulson writes about that as well. Reacting to a segment on Guatemala, Paulson writes:

For example, many would argue that part of the reason for Guatemala’s problem with malnutrition and poverty stems from a long history of inequitable international trade policies and American political interference (as well as corporate influence) in Central America.

The Gates Foundation steers clear of such hot-button political issues and we’ll see if ABC News does as well. Another example of a potential “blind spot” is the Seattle philanthropy’s tendency to favor technological solutions — such as vaccines or fortified foods — as opposed to messier issues involving governance, industry and economics.

A few additional thoughts:

Would this fly in another industry? Can you imagine a Citibank-financed investigative series on the financial industry? That’s probably a bad example for several reasons, including the Citibank-Gates comparison and the fact that the financial industry is not underreported. I’m having a hard time thinking of a comparable example: an industry that doesn’t get much news coverage, where a big actor funded the media — if you can think of an example, please let me know.

Obviously this induces a bias in the coverage. To say otherwise is pretty much indefensible to me. Think of it this way: if Noam Chomsky had a multi-billion dollar foundation that gave grants to the media to increase news coverage of international development, but did not have specific editorial control, would that not still bias the resulting coverage? Would an organization a) get those grants if it were not already likely to cover the subject with at least a gentle, overall bias towards Chomsky's point of view, or b) continue to get grants for new projects if it widely ridiculed Chomsky's approach? It doesn't have to be Chomsky — take your pick of someone with clearly identifiable positions on international issues, and you get the same picture. Do the communications staffers at the Gates Foundation need to personally review the story lines for this sort of bias to creep in? Of course not.

Which matters more: the bias or the increased coverage? For now I lean towards increased coverage, but this is up for debate. It’s really important that the funding be disclosed (as I understand it has been). It would also be nice if there was enough public demand for coverage of international development that the media covered it in all its complexity and difficulty and nuance without needing support from a foundation, but that’s not the world we live in for now. And maybe the funded coverage will ultimately result in more discussion of the structural and systemic roots of international inequality, rather than just “quick fixes.”

[Other thoughts on Gates and media funding by Paul Fortner, the Chronicle of Philanthropy, and (older) LA Times.]

Grad School Buffet

My program only requires one full academic year of coursework. The second year is a mix of a field practicum, work on a masters paper, and additional courses for those who choose to take some (and many do). But most students complete the core requirements for the degree in the first academic year, which is composed of four quarters. So far I’ve completed the first two quarters, which included 10 classes and 3 additional seminars for a total of 43 credits. Now I have to decide which classes to take in the 3rd and 4th quarters (January through May), when I have fewer required classes and more electives are offered.

Fellow Hopkins GDEC (global disease epidemiology and control) student Kriti at EpiTales describes the selection of public health courses at Hopkins as a buffet. Quite true. I've been trying to narrow down my courses for the 3rd and 4th terms and have come up with a preliminary list, excluding many classes that are redundant, don't fit my interests, or have prerequisites that I haven't taken. After narrowing it down a bit, I'm down to a list of a mere 42 courses, of which I'll be able to take 10-11 at most:

Armed Conflict and Health
Assessing Epidemiologic Impact of Human Rights Violations
Clinical and Epidemiologic Aspects of Tropical Diseases*
Clinical Vaccine Trials and Good Clinical Practice*
Comparative Evaluation for Health Policy in International Health
Current Issues In Public Health
Data Management Methods in Health Research Studies
Demographic Estimation for Developing Countries
Demographic Methods for Public Health
Design and Conduct of Community Trials*
Econometric Methods for Evaluation of Health Programs
Emerging Infections
Epidemiologic Inference in Outbreak Investigations
Ethics of Public Health Practice in Developing Countries
Ethnographic Fieldwork
Fundamentals of Budgeting and Financial Management*
Fundamentals of Program Evaluation
GDEC seminar (required)
Global Disease Control Programs and Policies (required)
Global Sustainability and Health Seminar
Global Tobacco Control
History of International Health and Development
History of Public Health
Infectious Diseases and Child Survival*
Intro to SAS Statistical Package
Introduction to Urban Health
Large-scale Effectiveness Evaluations of Health Programs
Nutrition in Disease Treatment and Prevention
Pandemics of the 20th Century
Poverty, Economic Development, and Health
Professional Epi Methods I
Professional Epi Methods II
Project Development for Primary Health Care in Developing Countries
Public Health Practice*
Scientific Grant Writing
Spatial Analysis and GIS I
Statistical Methods for Sample Surveys
Statistical Methods in Public Health III (required)
Statistical Methods in Public Health IV (required)
Systematic Reviews and Meta-Analysis
Vaccine Policy Issues

*Classes marked "(required)" are just that, while those marked with an asterisk meet some other requirements (I have to choose one from a cluster of courses on a subject area — it's a bit too complicated to explain here).

Obviously I’ll have to narrow it down a bit more. While I would probably enjoy most everything on the list, my strategy is to concentrate on coursework in epidemiology, biostatistics, and statistical software (we use Stata in the required biostatistics series, and I would also like to be familiar with SAS and ArcGIS). Then I’ll prioritize courses that provide additional skills in program evaluation and trial design and execution. If I have time left in my schedule I’ll get to take the other things — but it looks like I would need to go back for a second plate at the buffet to be able to take even half of these.

(If you’re a prospective student and want to browse for yourself, the JHSPH course search feature is here).

28 12 2010

Peace on Earth

The START treaty is one step closer to ratification in Russia. Of course, it probably doesn’t affect the probability that we’ll all die in a nuclear war in any significant way, but I still generally think a world with fewer nuclear warheads sitting around is a good thing.

The story I linked to is still missing something: a broader analysis of why the leadership of both countries is so firmly behind nuclear reduction. I understand that it's expensive to maintain the weapons, and that both sides likely see the reduction as having little effect on their deterrent capabilities, but there's a third reason. I've read — I can't remember where, and would love to be pointed to links in the comments — that the worldwide use of nuclear fuel for power generation outpaces worldwide mining of fissionable materials, and that continued destruction of old warheads is thus necessary to keep nuclear power cheap. If that's true, it's a pretty key fact that's being left out of coverage. Wouldn't the story be different if it were framed as "there's a shortage of nuclear fuel and if the elites can't figure out a way to keep disarmament going, it will affect energy prices in the long term"?

I think this underscores a shortcoming of traditional journalism. When coverage focuses on key individuals doing key things and making public statements, broader historical, social, and economic trends can get missed or downplayed. We end up with a "Great Men" first draft of history rather than a more complete and true picture.

Anyway, Merry Christmas.

24 12 2010

Randomizing in the USA, ctd

[Update: There's quite a bit of new material on this controversy if you're interested. Here's a PDF of Seth Diamond's testimony in support of (and extensive description of) the evaluation at a recent hearing, along with letters of support from a number of social scientists and public health researchers. Also, here's a separate article on the City Council hearing at which Diamond testified, and an NPR story that basically rehashes the Times one. Michael Gechter argues that the testing is wrong because there isn't doubt about whether the program works, but, as noted in the comments there, he doesn't address the fact that denial of service was already part of the program because it was underfunded.]

A couple weeks ago I posted a link to this NYTimes article on a program of assistance for the homeless that’s currently being evaluated by a randomized trial. The Poverty Action Lab blog had some discussion on the subject that you should check out too.

The short version is that New York City has a housing assistance program that is supposed to keep people from becoming homeless, but they never gave it a truly rigorous evaluation. It would have been better to evaluate it up front (before the full program was rolled out) but they didn’t do that, and now they are.  The policy isn’t proven to work, and they don’t have resources to give it to everyone anyway, so instead of using a waiting list (arguably a fair system) they’re randomizing people into receiving the assistance or not, and then tracking whether they end up homeless. If that makes you a little uncomfortable, that’s probably a good thing — it’s a sticky issue, and one that might wrongly be easier to brush aside when working in a different culture. But I think on balance it’s still a good idea to evaluate programs when we don’t know if they actually do what they’re supposed to do.

The thing I want to highlight for now is how the tone and presentation of an article shape your reactions to the issue being discussed. There's obviously an effect, but I thought this would be a good example because I noticed that the Times article contains both valid criticisms of the program and a good defense of why it makes sense to test it.

I reworked the article by rearranging the presentation of those sections. Mostly I just shifted paragraphs, but in a few cases I rearranged some clauses as well. I changed the headline, but otherwise I didn’t change a single word, other than clarifying some names when they were introduced in a different order than in the original. And by leading with the rationale for the policy instead of with the emotional appeal against it, I think the article gives a much different impression. Let me know what you think:

City Department Innovates to Test Policy Solutions

By CARA BUCKLEY with some unauthorized edits by BRETT KELLER

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

New York City is among a number of governments, philanthropies and research groups turning to so-called randomized controlled trials to evaluate social welfare programs.

The federal Department of Housing and Urban Development recently started an 18-month study in 10 cities and counties to track up to 3,000 families who land in homeless shelters. Families will be randomly assigned to programs that put them in homes, give them housing subsidies or allow them to stay in shelters. The goal, a HUD spokesman, Brian Sullivan, said, is to find out which approach most effectively ushered people into permanent homes.

The New York study involves monitoring 400 households that sought Homebase help between June and August. Two hundred were given the program's services, and 200 were not. Those denied help by Homebase were given the names of other agencies — among them H.R.A. Job Centers, Housing Court Answers and Eviction Intervention Services — from which they could seek assistance.

The city’s Department of Homeless Services said the study was necessary to determine whether the $23 million program, called Homebase, helped the people for whom it was intended. Homebase, begun in 2004, offers job training, counseling services and emergency money to help people stay in their homes.

The department, added commissioner Seth Diamond, had to cut $20 million from its budget in November, and federal stimulus money for Homebase will end in July 2012.

Such trials, while not new, are becoming especially popular in developing countries. In India, for example, researchers using a controlled trial found that installing cameras in classrooms reduced teacher absenteeism at rural schools. Children given deworming treatment in Kenya ended up having better attendance at school and growing taller.

“It’s a very effective way to find out what works and what doesn’t,” said Esther Duflo, an economist at the Massachusetts Institute of Technology who has advanced the testing of social programs in the third world. “Everybody, every country, has a limited budget and wants to find out what programs are effective.”

The department is paying $577,000 for the study, which is being administered by the City University of New York along with the research firm Abt Associates, based in Cambridge, Mass. The firm’s institutional review board concluded that the study was ethical for several reasons, said Mary Maguire, a spokeswoman for Abt: because it was not an entitlement, meaning it was not available to everyone; because it could not serve all of the people who applied for it; and because the control group had access to other services.

The firm also believed, she said, that such tests offered the “most compelling evidence” about how well a program worked.

Dennis P. Culhane, a professor of social welfare policy at the University of Pennsylvania, said the New York test was particularly valuable because there was widespread doubt about whether eviction-prevention programs really worked.

Professor Culhane, who is working as a consultant on both the New York and HUD studies, added that people were routinely denied Homebase help anyway, and that the study was merely reorganizing who ended up in that pool. According to the city, 5,500 households receive full Homebase help each year, and an additional 1,500 are denied case management and rental assistance because money runs out.

But some public officials and legal aid groups have denounced the study as unethical and cruel, and have called on the city to stop the study and to grant help to all the test subjects who had been denied assistance.

“They should immediately stop this experiment,” said the Manhattan borough president, Scott M. Stringer. “The city shouldn’t be making guinea pigs out of its most vulnerable.”

But, as controversial as the experiment has become, Mr. Diamond said that just because 90 percent of the families helped by Homebase stayed out of shelters did not mean it was Homebase that kept families in their homes. People who sought out Homebase might be resourceful to begin with, he said, and adept at patching together various means of housing help.

Advocates for the homeless said they were puzzled about why the trial was necessary, since the city proclaimed the Homebase program as “highly successful” in the September 2010 Mayor’s Management Report, saying that over 90 percent of families that received help from Homebase did not end up in homeless shelters. One critic of the trial, Councilwoman Annabel Palma, is holding a General Welfare Committee hearing about the program on Thursday.

“I don’t think homeless people in our time, or in any time, should be treated like lab rats,” Ms. Palma said.

“This is about putting emotions aside,” [Mr. Diamond] said. “When you’re making decisions about millions of dollars and thousands of people’s lives, you have to do this on data, and that is what this is about.”

Still, legal aid lawyers in New York said that apart from their opposition to the study’s ethics, its timing was troubling because nowadays, there were fewer resources to go around.

Ian Davie, a lawyer with Legal Services NYC in the Bronx, said Homebase was often a family’s last resort before eviction. One of his clients, Angie Almodovar, 27, a single mother who is pregnant with her third child, ended up in the study group denied Homebase assistance. “I wanted to cry, honestly speaking,” Ms. Almodovar said. “Homebase at the time was my only hope.”

Ms. Almodovar said she was told when she sought help from Homebase that in order to apply, she had to enter a lottery that could result in her being denied assistance. She said she signed a letter indicating she understood. Five minutes after a caseworker typed her information into a computer, she learned she would not receive assistance from the program.

With Mr. Davie’s help, she cobbled together money from the Coalition for the Homeless and a public-assistance grant to stay in her apartment. But Mr. Davie wondered what would become of those less able to navigate the system. “She was the person who didn’t fall through the cracks,” Mr. Davie said of Ms. Almodovar. “It’s the people who don’t have assistance that are the ones we really worry about.”

Professor Culhane said, “There’s no doubt you can find poor people in need, but there’s no evidence that people who get this program’s help would end up homeless without it.”

Arkansas

I arrived back in my hometown of Searcy, Arkansas. I haven’t been back in a year — I was living in DC from January through June, then traveling in Guatemala for the summer, and most recently living in Baltimore — so it’s good to be back. Searcy is ~18,000 people in central Arkansas, where the flat plains of the Mississippi Delta meet the first foothills of the Ozarks. The town once put up a billboard that proclaimed “Searcy, where thousands live as millions wish they could.” It’s also the home of Harding University, a conservative Christian university affiliated with the Church of Christ. My dad’s been a professor at Harding for decades, so Searcy has always been home and likely always will be.

Because I lived in the same small town for the first 23 years of my life, moving to Washington, DC in May of 2008 was a huge change. I had culture shock, but mostly in positive ways. When people would ask me if I liked DC, I would answer “Yes! But… I don’t really have anything to compare it to, because it’s my first city.” I was never sure if I just liked DC, or what I actually liked was the urban environment in general. (My friends from NYC laugh because DC hardly feels urban to them.) Now that I’ve lived for several months in my second city, Baltimore, I can say that I do like it, but maybe not as much as DC.

One realization I've had over the last year is that I believe the divide between urban and rural America (to dichotomize it) is as significant as, or maybe more significant than, the divide between liberal and conservative, religious and secular. Most of my friends from high school and college are rural, Southern, politically conservative (though often apathetic), married (some with kids on the way), and quite religious — of the evangelical Christian persuasion. No Jews, Muslims, Hindus, or Buddhists here. All of those adjectives (except married) once described me, but now I'm a politically liberal, secular, single young professional living in a big city. Yes, these traits are correlated: there are relatively few very religious young professionals living in big cities, and relatively few hardcore secularists in rural Southern towns. But I think the urban/rural divide has a bigger impact on my daily experience, and on shaping my views and actions, than any of the other traits.

I think I’ve become a thoroughly urban creature, but the small town roots linger. I like so many things about cities: the density that leads to so many people, so many jobs and so much food, culture, entertainment and transportation all being close. But I also like the space and beauty of the small town. That’s kind of a universal American narrative in a way; we all like to think we were born — and remain rooted — in small towns, even though the majority of us live in cities. I appreciate having grown up in a small town, and it’s nice to be back for an occasional visit, but it’s hard for me to imagine coming back here to live.

———–

A few random observations from my visit back to Arkansas so far:

1) Little Rock ain’t that big, though it felt huge when I was growing up.

2) There’s so much space around the roads and freeways, and within the towns. The space gives you a sense of openness, but it also means you have to drive everywhere.

3) I’m at my favorite coffeeshop in town (one of two, and the only other options for places to hang out are churches, a Hastings, and Wal-Mart) and the first person who walked in the door after me is wearing a t-shirt that says (only) “JESUS”.

4) People are different. Strong Southern accents for one. A lot more baseball caps, and pickup trucks. Women are wearing more makeup. More overweight and obese people than you typically see on the streets in a city. Lots of white people, few of anything else.

5) Finally, today's lunch spot was the Flying Pig BBQ.

21 12 2010

Culturomics

Several prominent scholars and the Google Books team have published a new paper that’s generating a lot of buzz (Google pun intended). The paper is in Science (available here) and (update) here’s the abstract:

We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of “culturomics”, focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. “Culturomics” extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities.

It’s readable, thought-provoking, and touches on many fields of study, so I imagine it will be widely read and cited. Others have noted many of the highlights, so here are some brief bulleted thoughts:

  • The authors don't explore the possible selection bias in their study. They note that the corpus of books they studied includes 4% of all published books. They specifically chose works that scanned better and have better metadata (author, date of publication, etc.), so it seems quite likely that these works differ systematically from those that were scanned but not chosen, and differ even more from those not yet scanned. Will the conclusions hold up when new books are added? Since many of the results were based on random subsets of the books (or n-grams) they studied, will those results hold up when other scholars try to recreate them with separate randomly chosen subsets?
  • Speaking of metadata, I would love to see an analysis of social networks amongst authors and how that impacts word usage. If someone had a listing of, say, several hundred authors from one time period, and some measure of how well they knew each other, and combined that information with an analysis of their works, you might get some sense of how “original” various authors were, and whether originality is even important in becoming famous.
  • The authors are obviously going for a big splash and make several statements that are a bit provocative and likely to be quoted. It will be great to see these challenged and discussed in subsequent publications. One example that is quotable but may not be fully supported by the data they present: “We are forgetting our past faster with each passing year.” But is the frequency with which a year (their example is 1951) appears in books actually representative of collective forgetting?
  • I love the word plays. An example: “For instance, we found “found” (frequency: 5×10^-4) 200,000 times more often than we finded “finded.” In contrast, “dwelt” (frequency: 1×10^-5) dwelt in our data only 60 times as often as “dwelled” dwelled.”
  • The "n-grams" studied here (collections of letters separated from others by spaces, which could be words, numbers, or typos) are too short for a study of copying and plagiarism, but similar approaches could yield insight into the commonality of copying or borrowing throughout history.
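Since the unit of analysis is so simple — a 1-gram is just a whitespace-delimited token — the core frequency calculation is easy to sketch. This is obviously nothing like the Google pipeline, just a toy illustration of relative frequency over a corpus (the sample text is mine, not from the paper):

```python
from collections import Counter

def one_gram_frequencies(text):
    """Split text into whitespace-delimited tokens (1-grams) and
    return each token's relative frequency in the text."""
    tokens = text.split()
    counts = Counter(tokens)
    total = len(tokens)
    return {token: n / total for token, n in counts.items()}

freqs = one_gram_frequencies(
    "we found found far more often than we finded finded"
)
# "found" appears 2 times among 10 tokens, so its frequency is 0.2
```

The paper's reported frequencies (like 5×10^-4 for "found") are this same ratio computed over billions of tokens, broken out by year of publication.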

21 12 2010

Randomizing in the USA

The NYTimes posted this article about a randomized trial in New York City:

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

Dean Karlan at Innovations for Policy Action responds:

It always amazes me when people think resources are unlimited. Why is “scarce resource” such a hard concept to understand?

I think two of the most important points here are that a) there weren’t enough resources for everyone to get the services anyway, so they’re just changing the decision-making process for who gets the service from first-come-first-served (presumably) to randomized, and b) studies like this can be ethical when there is reasonable doubt about whether a program actually helps or not. If it were firmly established that the program is beneficial, then it’s unethical to test it, which is why you can’t keep testing a proven drug against placebo.
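The mechanical part of that change is tiny, which is worth keeping in mind when the debate gets heated: a lottery is just random selection among applicants when slots are scarce. A minimal sketch (the applicant IDs and numbers here are hypothetical, loosely echoing the 400-household study):

```python
import random

def lottery_assign(applicants, n_slots, seed=2010):
    """Randomly choose which applicants receive the scarce service,
    instead of first-come-first-served; the rest form the comparison
    group that researchers can follow up on."""
    rng = random.Random(seed)  # fixed seed so the draw is reproducible
    chosen = set(rng.sample(range(len(applicants)), n_slots))
    treatment = [a for i, a in enumerate(applicants) if i in chosen]
    control = [a for i, a in enumerate(applicants) if i not in chosen]
    return treatment, control

# e.g. 400 applicant households, 200 service slots
treatment, control = lottery_assign(list(range(400)), 200)
```

The ethical weight is all in the surrounding conditions — genuine uncertainty about whether the program works, and too few slots for everyone regardless — not in the randomization step itself.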

However, this is good food for thought for those who are interested in doing randomized trials of development initiatives in other countries. It shows how individuals react to being treated as "test subjects" here in the US — and why should we expect people in other countries to feel differently? That said, a lot of randomized trials don't get this sort of pushback. I'm not familiar with this program beyond what I read in this article, but it's possible that more could have been done to communicate the purpose of the trial to the community, activists, and the media.

There are some interesting questions raised in the IPA blog comments as well.

Monday Miscellany

  • More than I ever knew about Tycho Brahe. The possible death by mercury poisoning, the duel, the fake nose made of gold and silver, the clairvoyant dwarf jester under his table, and the pet elk.
  • Wikileaks is more a movement than a single site, as its model will outlive the shutdown of the site or even the death of its founder. From Foreign Policy: for better or worse, over 200 mirror sites have already been set up, and thousands of individuals have downloaded a heavily-encrypted "insurance" file with the State Department cables. Meanwhile, government workers were ordered not to read the cables…
  • The singularity is past.
  • Elizabeth Pisani on the myth of hypothesis-driven science.

06 12 2010

The Changing Face of Epidemiology

Unlike many scientific disciplines, undergraduate training in epidemiology is fairly rare. I've met a lot of public health students over the past few months, but only a few majored as an undergrad in public health or something similar, and I haven't met anyone whose degree was in epidemiology. For the most part, people come to epidemiology from other fields: there are many physicians, and lots of pre-med students who decided they didn't want to be doctors (like me) or who still want to be. This has many implications for the field, including a bias towards looking at problems primarily through a biomedical lens, rather than through sociological, political, economic, or anthropological ones.

Another interesting consequence of this lack of (or only cursory) study of epidemiology before graduate school is that the introductory courses in epidemiology at most schools of public health are truly introductory. If you’re a graduate student in biochemistry and molecular biology (my undergraduate field), my guess is that it’s assumed you know something about the structure of nucleic acids, have drawn the Krebs cycle at some point, and may even have heard the PCR song.

In epidemiology we're essentially starting from scratch, so there's a need to move rapidly from having no common, shared knowledge, through learning basic vocabulary (odds ratios, relative risks, risk differences, etc.), all the way to analyzing extremely complex research. This presents pedagogical difficulties, of course, and it also makes it easier to miss out on the "big picture" of the field of epidemiology.
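For readers outside the field, that basic vocabulary is less mysterious than it sounds: both measures fall out of a simple 2x2 table of exposure versus outcome. A toy illustration, with counts I've invented purely for the arithmetic (not from any real study):

```python
# Toy 2x2 table: exposure (rows) by outcome (columns).
#                  disease   no disease
# exposed            a=90       b=910
# unexposed          c=10       d=990
a, b, c, d = 90, 910, 10, 990

risk_exposed = a / (a + b)      # 90/1000 = 0.09
risk_unexposed = c / (c + d)    # 10/1000 = 0.01

# Relative risk: how many times more likely the outcome is
# among the exposed.
relative_risk = risk_exposed / risk_unexposed   # ≈ 9.0

# Odds ratio: ratio of the odds of the outcome in the two groups;
# reduces to the cross-product of the table.
odds_ratio = (a * d) / (b * c)                  # ≈ 9.79
```

Note the two measures diverge here (9.0 vs. ~9.8) because the outcome isn't rare among the exposed; when the outcome is rare, the odds ratio approximates the relative risk, which is part of why case-control studies report odds ratios.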

For one of our earliest discussion labs in my epidemiologic methods course, we discussed a couple papers on smoking and lung cancer. While “everyone knows” today that smoking causes lung cancer, it’s a useful exercise to go back and look at the papers that actually established that as a scientific fact. In terms of teaching, it’s a great case-study for thinking about causality like an epidemiologist. After all,  most people who smoke never get lung cancer, and some people get lung cancer without ever smoking, so establishing causality requires a bit more thought. Two of the papers we read are great for illustrating some changes that have occurred as epidemiology has changed and matured over the last 50 years.

The first paper we looked at is “The Mortality of Doctors in Relation to Their Smoking Habits: A Preliminary Report,” written by Richard Doll and Bradford Hill in the British Medical Journal in 1954. (Free PDF here)  Doll and Hill followed up their groundbreaking study with “Mortality in relation to smoking: 50 years’ observations on male British doctors” in 2004 (available here).

A few observations: First, the 1954 paper is much shorter: around 4 1/2 pages of text compared to 8 1/2 in the 2004 article. The 1954 paper is much more readable as well: it’s conversational and uses much less specialized vocabulary (possibly because some of that vocabulary simply didn’t exist in 1954). The graphs are also crude and ugly compared to the computer-generated ones in 2004.

The 2004 paper also ends with this note: “Ethical approval: No relevant ethics committees existed in 1951, when the study began.”

Beyond the differences in style, length, and ethics-committee approval, the changes in authorship are notable. The original paper had just two authors: a physician and a statistician. The 2004 paper adds two more for a total of four (still small compared to many papers) — and notably, both new authors are women. Over those 50 years there was of course great progress in women’s representation in scientific research. While that record is still spotty in some areas, schools of public health today are producing many more female scholars than male — for example, current public health students at Hopkins are 71% female.

There has been a definite shift from the small-scale collaboration resulting in a paper with an individual, conversational style to the large-scale collaboration resulting in an extremely institutional output. One excellent example of this is a paper I read for an assignment today: “Serum B Vitamin Levels and Risk of Lung Cancer” by Johansson et al. in JAMA, 2010 (available here).

The Johansson et al. paper has ~8 pages of text, 47 references, 2 tables and 2 figures (all of which are quite complicated) and a number of online supplements. Its 46 authors have between them (by my count) 33 PhDs, 27 MDs, 3 MPHs, and 6 other graduate degrees! It’s hard to tell gender just by name, but by my count at least half of the authors are likely female.

Clearly, epidemiology has changed a lot in the last 50 years. Gone are the days of (at least explicit) male domination. Many of the problems with the field today are related to information management and large-scale collaborations. Gone are the days of one or two researchers publishing ground-breaking studies on their own — many of the “easy” discoveries have been made. Yet many of the examples we learn from — and role models young public health researchers may want to emulate — are from an earlier era.

05 12 2010

Results-Based Aid

Nancy Birdsall writes “On Not Being Cavalier About Results” about a recent critique of the UK’s DFID (Department for International Development):

The fear about an insistence on results arises from confusion about what “results” are. A legitimate typical concern is that aid bureaucracies pressed for “results” will resort, more than already is the case, to projects that provide inputs that seem to add up to easily measured “wins” (bednets delivered, books distributed, paramedics trained, vehicles or computers purchased, roads built) while neglecting “system” issues and “institution building”. But bednets and books and vehicles and roads are not results in any meaningful sense, and the connection between these inputs and real outcomes (healthier babies, better educated children, higher farmer income) goes through systems and institutions and is often lost….

Let us define results as measured gains in what children have learned by the end of primary school, or measured reductions in infant mortality or deforestation, or measured increases in the hours of electricity available, or annual increases in revenue from taxes paid by rich households in poor countries – or a host of other indicators that ultimately add up to the transformation of societies and the end of their dependence on outside aid. For a country to get results might not require more money but a reconfiguration of local politics, the cleaning up of bureaucratic red tape, local leadership in setting priorities or simply more exposure to the force of local public opinion. Let aid be more closely tied to well-defined results that recipient countries are aiming for; let donors and recipients start measuring and reporting those results to their own citizens; let there be continuous evaluation and learning about the mechanics of how recipient countries and societies get those results (their institutional shifts, their system reforms, their shifting politics and priorities), built on the transparency that Secretary Mitchell is often emphasizing.

(Emphasis added)

I’d also like to note that Birdsall is the founding director of the Center for Global Development, a nonprofit in DC that does a lot of work related to evidence-based aid. I relied fairly heavily on their report on “Closing the Evaluation Gap” for a recent dual-degree application. The full report is worth a read.

03 12 2010