Head Start Effectiveness Varies — the Important Question Is Why
By Katharine B. Stevens
BLOG
December 3, 2014
Now that the Child Care and Development Block Grant Act has finally been reauthorized, it seems likely that attention will eventually turn to the overdue reauthorization of Head Start, the 50-year-old federal preschool program serving about a million disadvantaged 3- and 4-year-olds each year. Researchers and policy analysts continue to debate whether the benefits of Head Start, the largest early childhood program in the US, justify its $8 billion annual cost. In the “no” camp, skeptics frequently cite the disappointing results of a recent randomized evaluation of Head Start as evidence that large-scale preschool programs are not a worthwhile investment of scarce public dollars. The Head Start Impact Study (HSIS), released in 2010, indeed concluded that Head Start had only limited positive effects on children while they were enrolled and that any effects had disappeared by the time those children reached first grade.
But the HSIS investigated only the aggregate impact of a nationally representative sample of almost 400 Head Start centers across the country, ignoring any variation in program effectiveness as well as its potential causes. The study’s findings on Head Start’s average effects thus obscure any differences in effectiveness among individual centers. In other words, some Head Start centers may be effective even as others are not.
An important new study published a few weeks ago by Christopher Walters, a professor of economics at the University of California, Berkeley, reanalyzed the HSIS data to address just this question. As common sense might suggest, he found that “some Head Start programs are substantially more effective than others.” He then investigated potential causes of that variation, examining whether several input variables conventionally identified as key to preschool quality—type of curriculum, whether teachers have a bachelor’s degree, class size, instructional time, home visiting, and center director experience—were in fact associated with more effective programs.
Of the program variables Walters analyzed, however, only instructional time and home visiting were significantly related to greater effectiveness: centers offering full-day programs and at least four home visits per year had greater positive effects on their students. Moreover, the commonly used quality metrics Walters tested explained only about a third of the variation in short-run cognitive effects across Head Start centers, suggesting that other, unspecified variables are crucial. One likely candidate is the way teachers actually interact with students, which recent research has identified as the single most important driver of preschool program quality.
Walters’ study sheds new light on what matters—and what doesn’t—for high-quality Head Start programs, and underscores that the real Head Start picture may well be more nuanced than either advocates or critics often acknowledge. When Head Start does come up for reauthorization, policymakers should pay less attention to sweeping generalizations about a huge program and more to the specifics of what works best and why.