What does social science know?

Marc Bellemare wrote a post “For Fellow Teachers: Revised Primers on Linear Regression and Causality.” Good stuff for students too — not just teachers. The primers are PDFs on linear regression (6 pages) and causality (3 pages), and they’re either 1) a concise summary if you’re studying this stuff already, or 2) something you should really read if you don’t have any background in quantitative methods.

I also really enjoyed an essay by Jim Manzi that Marc links to, titled “What Social Science Does — and Doesn’t — Know.” Manzi reviews the history of experimentation in the natural sciences and then in the social sciences. He discusses why it’s more difficult to extrapolate from randomized trials in the social sciences due to greater ‘causal density,’ amongst other reasons. Manzi summarizes a lot of research in criminology (a field I didn’t even know used many field trials) and ends with some conclusions that seem sharp (emphasis added):

…After reviewing experiments not just in criminology but also in welfare-program design, education, and other fields, I propose that three lessons emerge consistently from them.

First, few programs can be shown to work in properly randomized and replicated trials. Despite complex and impressive-sounding empirical arguments by advocates and analysts, we should be very skeptical of claims for the effectiveness of new, counterintuitive programs and policies, and we should be reluctant to trump the trial-and-error process of social evolution in matters of economics or social policy.

Second, within this universe of programs that are far more likely to fail than succeed, programs that try to change people are even more likely to fail than those that try to change incentives. A litany of program ideas designed to push welfare recipients into the workforce failed when tested in those randomized experiments of the welfare-reform era; only adding mandatory work requirements succeeded in moving people from welfare to work in a humane fashion. And mandatory work-requirement programs that emphasize just getting a job are far more effective than those that emphasize skills-building. Similarly, the list of failed attempts to change people to make them less likely to commit crimes is almost endless—prisoner counseling, transitional aid to prisoners, intensive probation, juvenile boot camps—but the only program concept that tentatively demonstrated reductions in crime rates in replicated RFTs was nuisance abatement, which changes the environment in which criminals operate….

I’d note here that many researchers and policymakers who are interested in health-related behavior change have been moving away from simply providing information or attempting to persuade people to change their behavior, and moving towards changing the unhealthy environments in which we live. NYC Health Commissioner Thomas Farley spoke explicitly about this shift in emphasis when he addressed us summer interns back in June. That approach is a direct response to frustration with the small returns from many behavioral intervention approaches, and an acknowledgment that we humans are stubborn creatures whose behavior is shaped (more than we’d like to admit) by our environments.

Manzi concludes:

And third, there is no magic. Those rare programs that do work usually lead to improvements that are quite modest, compared with the size of the problems they are meant to address or the dreams of advocates.

Right, no pie in the sky. If programs or policies had huge effects they’d be much easier to measure, for one. Read it all.

August 18, 2011

1 Comment

  1. Sarah: Word.