Archive for the ‘economics’ Category

Someone should study this: Addis housing edition

Attention development economists and any other researchers who have an interest in urban or housing policy in low-income countries:

My office in Addis has about 25 folks working in it, and we have a daily lunch pool where we pay in 400 birr a month (about 22 USD) to cover costs and all get to eat Ethiopian food for lunch every day. It’s been a great way to get to know my coworkers — my work is often more solitary: editing, writing, and analyzing data — and an even better way to learn about a whole variety of issues in Ethiopia.


Addis construction site (though probably not government condos)

The conversation is typically in Amharic, and my Amharic is quite limited, so I’m lucky if I can figure out the topic being discussed. [I usually know when they’re talking about work because so many NGO-speak words aren’t translated, for example: “amharic amharic amharic Health Systems Strengthening amharic amharic…”] But folks will of course translate things as needed. One observation is that certain topics affect their daily lives a lot, and thus come up over and over again at lunch.

One subject that has come up repeatedly is housing. Middle-class folks in Addis Ababa feel the housing shortage very acutely. Based on our conversations, it seems the major constraint is access to credit to buy or build a house.

The biggest source of good housing so far has been government-constructed condominiums, for which you pay a certain (I’m not sure how much) percentage down and then make payments over the years. (The government will soon launch a new “40/60 scheme” to which many folks are looking forward, in which anyone who can make a 40% down payment on a house will get a government mortgage for the remaining 60%.)

When my coworkers first mentioned that the government would offer the next round of condominiums by public lottery, my thought was “that will solve someone’s identification problem!” A large number of people — many thousands — have registered for the government lottery. I believe you have to meet a certain wealth or income threshold (i.e., be able to make the down payment), but after that condo eligibility will be determined randomly. I think that — especially if someone organizes the study prior to the lottery — this could yield very useful results on the impact of urban housing policy.

How (and how much) do individuals and families benefit from access to better housing? Are there changes in earnings, savings, investments? Health outcomes? Children’s health and educational outcomes? How does it affect political attitudes or other life choices? It could also be an opportunity to study migration between different neighborhoods, amongst many other things.
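
For anyone tempted to pick this up, here’s a minimal sketch of what the core comparison could look like, assuming registrants can be linked to their lottery outcome and to follow-up survey data. The data below are simulated and the variable names are hypothetical; a real analysis would of course have to deal with take-up, attrition, and so on.

```python
# A minimal sketch (simulated data, hypothetical variable names) of how the
# condo lottery's randomization could be used to estimate causal effects.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000  # registrants

# The lottery randomly decides who is offered a condo among eligible registrants.
won_lottery = rng.binomial(1, 0.2, size=n)
baseline_income = rng.normal(3_000, 800, size=n)  # birr/month, made up

# Hypothetical follow-up outcome (e.g., household savings), with an assumed true effect.
true_effect = 250
savings = 500 + 0.1 * baseline_income + true_effect * won_lottery + rng.normal(0, 400, size=n)

# Because the offer is randomized, a simple winners-vs-losers regression recovers
# the intent-to-treat effect; baseline covariates are included only for precision.
X = sm.add_constant(np.column_stack([won_lottery, baseline_income]))
fit = sm.OLS(savings, X).fit()
print(fit.params[1])  # estimate of the effect of winning; close to 250 in this simulation
```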

A Google Scholar search for Ethiopia housing lottery turns up several mentions, but (in my very quick read) no evaluations taking advantage of the randomization. (I can’t access this recent article in an engineering journal, but from the abstract I assume that it’s talking about a different kind of evaluation.) So, someone have at it? It’s just not that often that large public policy schemes are randomized.

06 12 2012

"As it had to fail"

My favorite line from the Anti-Politics Machine is a throwaway. The author, James Ferguson, an anthropologist, describes a World Bank agricultural development program in Lesotho, and also — through that lens — ends up describing development programs more generally. At one point he notes that the program failed “as it had to fail” — not really due to bad intentions, or to lack of technical expertise, or lack of funds — but because failure was written into the program from the beginning. Depressing? Yes, but valuable.

I read it in part because Chris Blattman keeps plugging it, and then shortly before leaving for Ethiopia I saw that a friend had a copy I could borrow. Somehow it didn’t make it onto reading lists for any of my classes for either of my degrees, though it should be required reading for pretty much anyone wanting to work in another culture (or, for that matter, trying to foment change in their own). Here’s Blattman’s description:

People’s main assets [in Lesotho] — cattle — were dying in downturns for lack of a market to sell them on. Households on hard times couldn’t turn their cattle into cash for school fees and food. Unfortunately, the cure turned out to be worse than the disease.

It turns out that cattle were attractive investments precisely because they were hard to liquidate. With most men working away from home in South Africa, buying cattle was the best way to keep the family saving rather than spending. They were a means for men to wield power over their families from afar.

Ferguson’s point was that development organizations attempt to be apolitical at their own risk. What’s more, he argued that they are structured to remain ignorant of the historical, political and cultural context in which they operate.

And here’s a brief note from Foreign Affairs:

The book comes to two main conclusions. First is that the distinctive discourse and conceptual apparatus of development experts, although good for keeping development agencies in business, screen out and ignore most of the political and historical facts that actually explain Third World poverty, since these realities suggest that little can be accomplished by apolitical “development” interventions. Second, although enormous schemes like Thaba-Tseka generally fail to achieve their planned goals, they do have the major unplanned effect of strengthening and expanding the power of politically self-serving state bureaucracies. Particularly good is the discussion of the “bovine mystique,” in which the author contrasts development experts’ misinterpretation of “traditional” attitudes toward uneconomic livestock with the complex calculus of gender, cash and power in the rural Lesotho family.

The reality was that Lesotho was not really an idyllically-rural-but-poor agricultural economy, but rather a labor reserve more or less set up and controlled by apartheid South Africa. The gulf at the time between the actual political situation and the situation as envisioned by the World Bank — where the main problems were a lack of markets and technical solutions — was enormous. This lets Ferguson have a lot of fun showing the absurdities of Bank reports from the era, and once you realize what’s going on it’s quite frustrating to read how the programs turned out, and to wonder how no one saw it coming.

This contrast between rhetoric and reality is the book’s greatest strength. Because the situation is absurd, it illustrates Ferguson’s points very well: that aid is inherently political, and that projects that ignore that reality have their future failure baked in from the start. But that contrast is a weakness too: because the situation is extreme, you’re left wondering just how representative the case of Lesotho really was (or is). The 1970s-80s era World Bank certainly makes a great buffoon (if not quite a villain) in the story, and one wonders if things aren’t at least a bit better today.

Either way, this is one of the best books on development I’ve read, as I find myself mentally referring to it on a regular basis. Is the rhetoric I’m reading (or writing) really how it is? Is that technical, apolitical-sounding intervention really going to work? It’s made me think more critically about the effect outside groups — even seemingly benevolent, apolitical ones — have on local politics. On the other hand, the Anti-Politics Machine does read a bit like it was adapted from an anthropology dissertation (it was); I wish it could get a new edition with more editing to make it more presentable. And a less ugly cover. But that’s no excuse to skip it — if you want to work in development or international health or any related field, it should be high on your reading list.

Fugue

It took the World Bank nearly three decades to set up an evaluation outfit — a new paper by Michele Alacevich tells the story of how that came to pass. It’s a story about, amongst other things, the tension between academia and programs, between context-specific knowledge and generalizable lessons. The abstract:

Since its birth in 1944, the World Bank has had a strong focus on development projects. Yet, it did not have a project evaluation unit until the early 1970s. An early attempt to conceptualize project appraisal had been made in the 1960s by Albert Hirschman, whose undertaking raised high expectations at the Bank. Hirschman’s conclusions—published first in internal Bank reports and then, as a book in 1967—disappointed many at the Bank, primarily because they found it impractical. Hirschman wanted to offer the Bank a new vision by transforming the Bank’s approach to project design, project management and project appraisal. What the Bank expected from Hirschman, however, was not a revolution but an examination of the Bank’s projects and advice on how to make project design and management more measurable, controllable, and suitable for replication. The history of this failed collaboration provides useful insights on the unstable equilibrium between operations and evaluation within the Bank. In addition, it shows that the Bank actively participated in the development economics debates of the 1960s. This should be of interest for development economists today who reflect on the future of their discipline emphasizing the need for a non-dogmatic approach to development. It should also be of interest for the Bank itself, which is stressing the importance of evaluation for effective development policies. The history of the practice of development economics, using archival material, can bring new perspectives and help better understand the evolution of this discipline.

And this from the introduction:

Furthermore, the Bank all but ignored the final outcome of his project, the 1967 book, and especially disliked its first chapter…. In particular, Hirschman’s insistence on uncertainty as a structural element in the decision-making process did not fit in well with the operational drive of Bank economists and engineers.

Why’d they ignore it?

The Bank, Hirschman wrote, should avoid the “air of pat certainty” that emanated from project prospects and instead expose the uncertainties underlying them, exploring the whole range of possible outcomes. Moreover, the Bank should take into account the distributional and, more generally, the social and political effects of its lending.

It seems that one of the primary lessons of studying development economics is that many if not most of the biggest arguments you hear today already took place a generation ago. As with fashion, trends come and go, and ultimately come again. The arguments weren’t necessarily solved; they were just pushed aside when something newer and shinier came along. Even the argument against bold centrally-planned strategies — and in favor of facing up to the inherent uncertainty of complex systems — has been made before. It failed to catch on, for reasons of politics and personality. Ultimately the systems in place may not want to hear results that downplay their importance and potency in the grand scheme of things. On that note, it seems that if history doesn’t exactly repeat itself, it will at least continue to have some strong echoes of past debates.

Alacevich’s paper is free to download here. H/t to Andres Marroquin, who reads and shares a ridiculous number of interesting things.

14 11 2012

Another type of mystification

A long time ago (in years, two more than the product of 10 and the length of a single American presidential term) John Siegfried wrote this First Lesson in Econometrics (PDF). It starts with this:

Every budding econometrician must learn early that it is never in good taste to express the sum of two quantities in the form: “1 + 1 = 2”.

… and just goes downhill from there. Read it.

(I wish I remembered where I first saw this so I could give them credit.)

12 09 2012

Why we should lie about the weather (and maybe more)

Nate Silver (who else?) has written a great piece on weather prediction — “The Weatherman is Not a Moron” (NYT) — that covers both the proliferation of data in weather forecasting, and why the quantity of data alone isn’t enough. What intrigued me though was a section at the end about how to communicate the inevitable uncertainty in forecasts:

…Unfortunately, this cautious message can be undercut by private-sector forecasters. Catering to the demands of viewers can mean intentionally running the risk of making forecasts less accurate. For many years, the Weather Channel avoided forecasting an exact 50 percent chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40. In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.) For years, when the Weather Channel said there was a 20 percent chance of rain, it actually rained only about 5 percent of the time.

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

My thought when reading this was that there are actually two different reasons why you might want to systematically adjust reported percentages (i.e., fib a bit) when trying to communicate the likelihood of bad weather.

But first, an aside on what public health folks typically talk about when they talk about communicating uncertainty: I’ve heard a lot (in classes, in blogs, and in Bad Science, for example) about reporting absolute risks rather than relative risks, and about avoiding other ways of communicating risks that generally mislead. What people don’t usually discuss is whether the point estimates themselves should ever be adjusted; rather, we concentrate on how to best communicate whatever the actual values are.

Now, back to weather. The first reason you might want to adjust the reported probability of rain is that people are rain averse: they care more strongly about getting rained on when it wasn’t predicted than vice versa. It may be perfectly reasonable for people to feel this way, and so why not cater to their desires? This is the reason described in the excerpt from Silver’s article above.

Another way to describe this bias is that most people would prefer to minimize Type II error (false negatives) at the expense of having more Type I error (false positives), at least when it comes to rain. Obviously you could take this too far — reporting rain every single day would completely eliminate Type II error, but it would also make forecasts worthless. Likewise, with big events like hurricanes the costs of Type I errors (wholesale evacuations, cancelled conventions, etc.) become much greater, so this adjustment would be more problematic as the cost of false positives increases. But generally speaking, the so-called “wet bias” of adjusting all rain prediction probabilities upwards might be a good way to increase the general satisfaction of a rain-averse public.
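
To make the trade-off concrete, here’s a toy simulation (all numbers made up) in which people act on a forecast, say by carrying an umbrella, whenever the reported chance of rain is at least 50%. Inflating every forecast cuts the ruined-picnic errors at the cost of more false alarms:

```python
# Toy illustration of the "wet bias" trade-off between error types (made-up numbers).
import numpy as np

rng = np.random.default_rng(1)
n_days = 100_000

true_prob = rng.uniform(0, 1, n_days)     # the forecaster's honest probability each day
rained = rng.random(n_days) < true_prob   # what actually happens

def error_rates(reported_prob, act_threshold=0.5):
    """Share of days with each error type if people act when reported_prob >= threshold."""
    acted = reported_prob >= act_threshold
    type_ii = np.mean(rained & ~acted)    # got rained on with no warning (false negative)
    type_i = np.mean(~rained & acted)     # prepared for rain that never came (false positive)
    return type_ii, type_i

honest = error_rates(true_prob)
wet_biased = error_rates(np.clip(true_prob + 0.15, 0, 1))  # bump every forecast up 15 points

print("honest:     Type II = %.3f, Type I = %.3f" % honest)
print("wet-biased: Type II = %.3f, Type I = %.3f" % wet_biased)
```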

The second reason one might want to adjust the reported probability of rain — or some other event — is that people are generally bad at understanding probabilities. Luckily, though, people tend to be bad at estimating probabilities in surprisingly systematic ways! Kahneman’s excellent (if too long) book Thinking, Fast and Slow covers this at length. The best summary of these biases that I could find through a quick Google search was from Lee Merkhofer Consulting:

 Studies show that people make systematic errors when estimating how likely uncertain events are. As shown in [the graph below], likely outcomes (above 40%) are typically estimated to be less probable than they really are. And, outcomes that are quite unlikely are typically estimated to be more probable than they are. Furthermore, people often behave as if extremely unlikely, but still possible outcomes have no chance whatsoever of occurring.

The graph from that link is a helpful, if somewhat stylized, visualization of the same biases.

In other words, people think that likely events (in the 30-99% range) are less likely to occur than they are in reality, that unlikely events (in the 1-30% range) are more likely to occur than they are in reality, and that extremely unlikely events (very close to 0%) won’t happen at all.
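
The prospect-theory literature often summarizes this pattern with a probability weighting function. One common parametric form, due to Tversky and Kahneman, with the curvature parameter typically estimated somewhere around 0.6-0.7, is:

$$ w(p) \;=\; \frac{p^{\gamma}}{\bigl(p^{\gamma} + (1-p)^{\gamma}\bigr)^{1/\gamma}}, \qquad 0 < \gamma < 1 $$

which overweights small probabilities and underweights moderate-to-large ones, roughly matching the pattern described above.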

My recollection is that these biases can be a bit different depending on whether the predicted event is bad (getting hit by lightning) or good (winning the lottery), and that the familiarity of the event also plays a role. Regardless, with something like weather, where most events are within the realm of lived experience and most of the probabilities lie within a reasonable range, the average bias could probably be measured pretty reliably.

So what do we do with this knowledge? Think about it this way: we want to increase the accuracy of communication, but there are two different points in the communications process where you can measure accuracy. You can care about how accurately the information is communicated from the source, or how well the information is received. If you care about the latter, and you know that people have systematic and thus predictable biases in perceiving the probability that something will happen, why not adjust the numbers you communicate so that the message — as received by the audience — is accurate?

Now, some made up numbers: Let’s say the real chance of rain is 60%, as predicted by the best computer models. You might adjust that up to 70% if that’s the reported risk that makes people perceive a 60% objective probability (again, see the graph above). You might then adjust that percentage up to 80% to account for rain aversion/wet bias.
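
If you take the weighting function above literally, “adjusting” is just inverting it: report the number whose perceived value equals the model’s true probability, then add any wet-bias bump on top. Here’s a sketch with an assumed curvature of 0.6 (so the exact corrections come out a bit larger than the made-up 70% and 80% above):

```python
# Sketch: choose the number to REPORT so that the probability the audience PERCEIVES
# matches the model's true probability. The weighting function and gamma = 0.6 are
# assumptions borrowed from the prospect-theory literature, not a calibrated model.
from scipy.optimize import brentq

def perceived(p, gamma=0.6):
    """Tversky-Kahneman-style weighting: what a stated probability p 'feels like'."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def report_for(true_p, gamma=0.6, wet_bias=0.0):
    """Reported probability whose perceived value equals true_p, plus an optional bump."""
    r = brentq(lambda x: perceived(x, gamma) - true_p, 1e-9, 1 - 1e-9)
    return min(r + wet_bias, 1.0)

print(round(report_for(0.60), 2))                  # perception-corrected report (~0.8 here)
print(round(report_for(0.60, wet_bias=0.10), 2))   # with an extra wet-bias bump on top
```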

Here I think it’s important to distinguish between technical and popular communication channels: if you’re sharing raw data about the weather or talking to a group of meteorologists or epidemiologists then you might take one approach, whereas another approach makes sense for communicating with a lay public. For folks who just tune in to the evening news to get tomorrow’s weather forecast, you want the message they receive to be as close to reality as possible. If you insist on reporting the ‘real’ numbers, you actually draw your audience further from understanding reality than if you fudged them a bit.

The major and obvious downside to this approach is that if people know this is happening, it won’t work, or they’ll be mad that you lied — even though you were only lying to better communicate the truth! One possible way of getting around this is to describe the numbers as something other than percentages, using some made-up index that sounds enough like a percentage to satisfy the layperson while also being open to detailed examination by those who are interested.

For instance, we all know the heat index and wind chill aren’t the same as temperature, but rather represent just how hot or cold the weather actually feels. Likewise, we could report something like a “Rain Risk” or “Rain Risk Index” that accounts for known biases in risk perception and rain aversion. The weatherman would report a Rain Risk of 80% when the actual probability of rain is just 60%. This would give recipients more useful information, while also maintaining technical honesty and some level of transparency.

I care a lot more about health than about the weather, but I think predicting rain is a useful device for talking about the same issues of probability perception in health, for several reasons. First off, the probabilities in rain forecasting are much more within the realm of human experience than the rare probabilities that come up so often in epidemiology. Second, the ethical stakes feel a bit lower when writing about lying about the weather than when suggesting, say, that physicians should systematically mislead their patients, even if the ultimate aim of the adjustment is to better inform them.

I’m not saying we should walk back all the progress we’ve made in terms of letting patients and physicians make decisions together, rather than the latter withholding information and paternalistically making decisions for patients based on the physician’s preferences rather than the patient’s. (That would be silly in part because physicians share their patients’ biases.) The idea here is to come up with better measures of uncertainty — call it adjusted risk or risk indexes or weighted probabilities or whatever — that help us bypass humans’ systematic flaws in understanding uncertainty.

In short: maybe we should lie to better tell the truth. But be honest about it.

Another kind of "game" theory

Alan Greenspan wooed his first wife by inviting her to his apartment so that he could read her an article he had written on monopolies:

We had been discussing monopolies at dinner, and I was trying to keep her engaged. So I invited her back to my apartment to read an article I wrote in the mid-1960s. I was raising serious questions about the whole basis of the Sherman Antitrust Act of 1890, and I actually went through considerable detail as to what I thought was wrong about it. Nothing I’ve seen in reality has changed it. I’m not denying that monopolies are terrible things, but I am denying that it is readily easy to resolve them through legislation of that nature. But we were in conversation of some form or another which related to this. So I was terribly curious to see how she’d respond.

Also:

I’ve frankly forgotten how she responded.

30 08 2012

Life among the Econ

“Life among the Econ” (ungated PDF) is a classic essay by Axel Leijonhufvud that’s worth revisiting — here’s the intro:

The Econ tribe occupies a vast territory in the far North. Their land appears bleak and dismal to the outsider, and travelling through it makes for rough sledding; but the Econ, through a long period of adaptation, have learned to wrest a living of sorts from it. They are not without some genuine and sometimes even fierce attachment to their ancestral grounds, and their young are brought up to feel contempt for the softer living in the warmer lands of their neighbours such as the Polscis and the Sociogs. Despite a common genetical heritage, relations with these tribes are strained – the distrust and contempt that the average Econ feels for these neighbours being heartily reciprocated by the latter – and social intercourse with them is inhibited by numerous taboos. The extreme clannishness, not to say xenophobia, of the Econ makes life among them difficult and perhaps even somewhat dangerous for the outsider. This probably accounts for the fact that the Econ have so far not been systematically studied….

28 08 2012

When randomization is strategic

Here’s a quote from Tom Yates on his blog Sick Populations about a speech he heard by Rachel Glennerster of J-PAL:

Glennerster pointed out that the evaluation of PROGRESA, a conditional cash transfer programme in Mexico and perhaps the most famous example of randomised evaluation in social policy, was instigated by a Government who knew they were going to lose the next election. It was a way to safeguard their programme. They knew the next Government would find it hard to stop the trial once it was started and were confident the evaluation would show benefit, again making it hard for the next Government to drop the programme. Randomisation can be politically advantageous.

I think I read this about Progresa / Oportunidades before but had forgotten it, and thus it’s worth re-sharing. The way in which Progresa was randomized (different areas were stepped into the program, so one cohort got it later than others, but all the high-need areas got it within a few years) made this more politically feasible as well. I think this situation, in which a government institutes a study of a program to keep it alive through subsequent changes of government, will probably be a less common tactic than its opposite: a government designing an evaluation of a popular program that a) it thinks doesn’t work, b) it wants to cut, and c) the public otherwise likes, just to prove that it should be cut. But only time will tell.
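
For anyone unfamiliar with that design, here’s a minimal sketch of phased random assignment; the wave count here is hypothetical, though the PROGRESA evaluation sample really was on the order of 500 localities.

```python
# Toy sketch of a phased ("stepped") rollout: every locality eventually gets the
# program, but the order is randomized, so later waves serve as temporary controls.
# The number of waves is a made-up assumption for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_localities = 500   # roughly the size of the PROGRESA evaluation sample
n_waves = 3          # hypothetical number of phase-in waves

order = rng.permutation(n_localities)                 # random ordering of localities
wave = np.empty(n_localities, dtype=int)
wave[order] = np.arange(n_localities) % n_waves + 1   # split the random order into waves

for w in range(1, n_waves + 1):
    print(f"wave {w}: {np.sum(wave == w)} localities phased in")
```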

16 08 2012

The great quant race

My Monday link round-up included this Big Think piece asking eight young economists about the future of their field. But I wanted to highlight the response from Justin Wolfers:

Economics is in the midst of a massive and radical change.  It used to be that we had little data, and no computing power, so the role of economic theory was to “fill in” for where facts were missing.  Today, every interaction we have in our lives leaves behind a trail of data.  Whatever question you are interested in answering, the data to analyze it exists on someone’s hard drive, somewhere.  This background informs how I think about the future of economics.

Specifically, the tools of economics will continue to evolve and become more empirical.  Economic theory will become a tool we use to structure our investigation of the data.  Equally, economics is not the only social science engaged in this race: our friends in political science and sociology use similar tools; computer scientists are grappling with “big data” and machine learning; and statisticians are developing new tools.  Whichever field adapts best will win.  I think it will be economics.  And so economists will continue to broaden the substantive areas we study.  Since Gary Becker, we have been comfortable looking beyond the purely pecuniary domain, and I expect this trend towards cross-disciplinary work to continue.

I think it’s broadly true that economics will become more empirical, and that this is a good thing, but I’m not convinced economics will “win” the race. This tracks somewhat with the thoughts from Marc Bellemare that I’ve linked to before: his post on “Methodological convergence in the social sciences” is about the rise of mathematical formalism in social sciences other than economics. This complements the rise of empirical methods, in the sense that while they are different developments, both are only possible because of the increasing mathematical, statistical, and coding competency of researchers in many fields. And I think the language of convergence is more likely to represent what will happen (and what is already happening), rather than the language of a “race.”

We’ve already seen an increase in RCTs (developed in medicine and epidemiology) in economics and political science, and the decades ahead will (hopefully) see more routine, serious analysis of observational data in epidemiology and other fields, in the sense that the analysis will be more careful about causal inference. Meanwhile, advanced statistical techniques and machine learning methodologies will become commonplace across all fields as researchers deal with massive, complex longitudinal datasets gleaned not just from surveys but increasingly from everyday data collection.

Economists have a head start in that their starting pool of talent is generally more mathematically competent than other social sciences’ incoming PhD classes. But, switching back to the “race” terminology, economics will only “win” if — as Wolfers speculates will happen — it can leverage theory as a tool for structuring investigation. My rough impression is that economic theory does play this role, sometimes, but it has also held empirical investigation in economics back at times, perhaps through publication bias against empirical results that don’t fit the theory (the minimum wage literature is one example), and possibly more broadly through a general closure of routes of investigation that would not occur to someone already trained in economic theory.

Regardless, I get the impression that if you want to be a cutting-edge researcher in any social science you should be beefing up not only your mathematical and statistical training, but also your coding practice.

Update: Stevenson and Wolfers expand their thoughts in this excellent Bloomberg piece. And more at Freakonomics here.

08 08 2012

Aid, paternalism, and skepticism

Bill Easterly, the ex-blogger who just can’t stop, writes about a conversation he had with GiveWell, a charity reviewer/giving guide that relies heavily on rigorous evidence to pick programs to invest in. I’ve been meaning to write about GiveWell’s approach — which I generally think is excellent. Easterly, of course, is an aid skeptic in general and a critic of planned, technocratic solutions in particular. Here’s an excerpt from his notes on his conversation with GiveWell:

…a lot of things that people think will benefit poor people (such as improved cookstoves to reduce indoor smoke, deworming drugs, bed nets and water purification tablets) [are things] that poor people are unwilling to buy for even a few pennies. The philanthropy community’s answer to this is “we have to give them away for free because otherwise the take-up rates will drop.” The philosophy behind this is that poor people are irrational. That could be the right answer, but I think that we should do more research on the topic. Another explanation is that the people do know what they’re doing and that they rationally do not want what aid givers are offering. This is a message that people in the aid world are not getting.

Later, in the full transcript, he adds this:

We should try harder to figure out why people don’t buy health goods, instead of jumping to the conclusion that they are irrational.

Also:

It’s easy to catch people doing irrational things. But it’s remarkable how fast and unconsciously people get things right, solving really complex problems at lightning speed.

I’m with Easterly, up to a point: aid and development institutions need much better feedback loops, but are unlikely to develop them for reasons rooted in their nature and funding. The examples of bad aid he cites are often horrendous. But I think this critique is limited, especially on health, where the RCTs and all other sorts of evidence really do show that we can have massive impact — reducing suffering and death on an epic scale — with known interventions. [Also, a caution: the notes above are just notes and may have been worded differently if they were a polished, final product — but I think they’re still revealing.]

Elsewhere Easterly has been more positive about the likelihood of benefits from health aid/programs in particular, so I find it quite curious that his examples above of things that poor people don’t always price rationally are all health-related. Instead, in the excerpts above he falls back on that great foundational argument of economists: if people are rational, why have all this top-down institutional interference? Well, I couldn’t help contrasting that argument with this quote highlighted by another economist, Tyler Cowen, at Marginal Revolution:

Just half of those given a prescription to prevent heart disease actually adhere to refilling their medications, researchers find in the Journal of American Medicine. That lack of compliance, they estimate, results in 113,000 deaths annually.

Let that sink in for a moment. Residents of a wealthy country, the United States, do something very, very stupid. All of the RCTs show that taking these medicines will make them live longer, but people fail to overcome the barriers at hand to take something that is proven to make them live longer. As a consequence they die by the hundreds of thousands every single year. Humans may make remarkably fast unconscious decisions correctly in some spheres, sure, but it’s hard to look at this result and see any way in which it makes much sense.

Now think about inserting Easterly’s argument against paternalism (he doesn’t specifically call it that here, but has done so elsewhere) in philanthropy here: if people in the US really want to live, why don’t they take these medicines? Who are we to say they’re irrational? That’s one answer, but maybe we don’t understand their preferences and should avoid top-down solutions until we have more research.

Reductio ad absurdum? Maybe. On the one hand, we do need more research on many things, including medication take-up in high- and low-income countries. On the other hand, aid skepticism that goes far enough to be against proven health interventions just because people don’t always value those interventions rationally seems to line up a good deal with the sort of anti-paternalism-above-all streak in conservatism that opposes government intervention in pretty much every area. Maybe it’s worth trying out some nudge-y (libertarian paternalism, if you will) policies to encourage people to take their medicine, or to require people to have health insurance they would not choose to buy on their own.

Do you want to live longer? I bet you do, and it’s safe to assume that people in low-income countries do as well. Do you always do exactly what will help you do so? Of course not: observe the obesity pandemic. Do poor people really want to suffer from worms or have their children die from diarrhea? Again, of course not. While poor people in low-income countries aren’t always willing to invest a lot of time or pay a lot of money for things that would clearly help them stay alive for longer, that shouldn’t be surprising to us. Why? Because the exact same thing is true of rich people in wealthy countries.

People everywhere — rich and poor — make dumb decisions all the time, often because those decisions are easier in the moment due to our many irrational cognitive and behavioral tics. Those seemingly dumb decisions usually reveal the non-optimal decision-making environments in which we live, but you’d still think we could overcome those things to choose interventions that are very clearly beneficial. But we don’t always. The result is that sometimes people in low-income countries might not pay out of pocket for deworming medicine or bednets, and sometimes people in high-income countries don’t take their medicine — these are two sides of the same coin.

Now, to a more general discussion of aid skepticism: I agree with Easterly (in the same post) that aid skeptics are a “feature of the system” that ultimately make it more robust. But it’s an iterative process that is often frustrating in the moment for those who are implementing or advocating for specific programs (in my case, health) because we see the skeptics as going too far. I’m probably one of the more skeptical implementers out there — I think the majority of aid programs probably do more harm than good, and chose to work in health in part because I think that is less true in this sector than in others. I like to think that I apply just the right dose of skepticism to aid skepticism itself, wringing out a bit of cynicism to leave the practical core.

I also think that there are clear wins, supported by the evidence, especially in health, and thus that Easterly goes too far here. Why does he? Because his aid skepticism isn’t simply pragmatic, but also rooted in an ideological opposition to all top-down programs. That’s a nice way to put it, one that I think he might even agree with. But ultimately that leads to a place where you end up lumping things together that are not the same, and I’ll argue that that does some harm. Here are two examples of aid, both more or less from Easterly’s post:

  • Giving away medicines or bednets free, because otherwise people don’t choose to invest in them; and,
  • A World Bank project in Uganda that “ended up burning down farmers’ homes and crops and driving the farmers off the land.”

These are both, in one sense, paternalistic, top-down programs, because they are based on the assumption that sometimes people don’t choose to do what is best for themselves. But are they the same otherwise? I’d argue no. One might argue that they come from the same place, and that an institution that funds the first will inevitably mess up and do the latter — but I don’t buy that strong form of aid skepticism. And being able to lump the apparently good program and the obviously bad together is what makes Easterly’s rhetorical stance powerful.

If you so desire, you could label these two approaches as weak coercion and strong coercion. They are both coercive in the sense that they reshape the situations in which people live to help achieve an outcome that someone — a planner, if you will — has decided is better. All philanthropy and much public policy is coercive in this sense, and those who are ideologically opposed to it have a hard time seeing the difference. But to many of us, it’s really only the latter, obvious harm that we dislike, whereas free medicines don’t seem all that bad. I think that’s why aid skeptics like Easterly group these two together, because they know we’ll be repulsed by the strong form. But when they argue that all these policies are ultimately the same because they ignore people’s preferences (as demonstrated by their willingness to pay for health goods, for example), the argument doesn’t sit right with a broader audience. And then ultimately it gets ignored, because these things only really look the same if you look at them through certain ideological lenses.

That’s why I wish Easterly would take a more pragmatic approach to aid skepticism; such a form might harp on the truly coercive aspects without lumping them in with the mildly paternalistic. Condemning the truly bad things is very necessary, and folks “on the inside” of the aid-industrial complex aren’t generally well-positioned to make those arguments publicly. However, I think people sometimes need a bit of the latter policies, the mildly paternalistic ones like giving away medicines and nudging people’s behavior — in high- and low-income countries alike. Why? Because we’re generally the same everywhere, doing what’s easiest in a given situation rather than what we might choose were the circumstances different. Having skeptics on the outside where they can rail against wrongs is incredibly important, but they must also be careful to yell at the right things lest they be ignored altogether by those who don’t share their ideological priors.