It has become a talking point for Betsy DeVos and a powerful example of the challenges of turning around struggling schools: a national study, released by the federal government, showing that its multibillion-dollar turnaround program failed.
“The previous administration spent seven billion of your dollars on ‘School Improvement Grants,’ thinking they could demonstrate that money alone would solve the problem,” DeVos said soon after becoming education secretary. “They tested their model, and it failed miserably.”
A new report says, not so fast. It points to studies of places like San Francisco, where the approach seemed to help students, and to the limitations of the government’s study, to argue that the federal report painted too grim a picture.
“The autopsy on the grant program is flawed and its core conclusion faulty,” says the analysis, released by FutureEd, a Georgetown-based think tank generally supportive of Obama-era education policies.
It’s a year-old debate that remains relevant, as the study has become a touchstone for the idea that the federal government is unable to help long-struggling schools improve. And it comes as states, now with more freedom, are grappling with how to intervene in their lowest-performing schools.
The federal turnaround program, known as School Improvement Grants or SIG, was a signature initiative of the Obama administration. In exchange for federal money, schools had to make changes, using one of four prescribed approaches. About three-quarters chose the least disruptive option: firing the principal and making adjustments like lengthening the school day or toughening teacher evaluations. Others replaced the principal and half the teaching staff. Few chose the remaining options: turning a school over to a charter school operator or closing it altogether.
The initiative was evaluated by the external research firm Mathematica and the American Institutes for Research, with the findings released by the Education Department’s research arm, the Institute of Education Sciences. The results, published in January 2017, weren’t pretty.
“There was also no evidence that SIG had significant impacts on math or reading test scores, high school graduation, or college enrollment,” the study said.
The FutureEd report, written by two former Department of Education officials, suggests those conclusions are flawed for a few reasons, some of which were noted when the study was first released.
For one, it points out that even if School Improvement Grants did improve students’ academic outcomes, the study would have had trouble detecting that, because its bar for “statistical significance” was a very high one to clear.
The study actually estimated that the grants led to modest boosts in reading test scores and small declines in high school graduation rates — but neither impact was statistically significant, which is what the researchers mean when they say they found “no evidence.”
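To make that distinction concrete, here is a minimal, purely illustrative sketch in Python, using made-up numbers rather than figures from the study: a positive point estimate whose 95 percent confidence interval includes zero would be reported as showing “no evidence” of an effect.

# Purely illustrative: hypothetical numbers, not figures from the SIG study.
# Shows how a positive estimate can still be "not statistically significant."
from scipy import stats

estimate = 0.03     # hypothetical effect: +0.03 standard deviations in reading
std_error = 0.04    # hypothetical standard error of that estimate

z = estimate / std_error                    # test statistic
p_value = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
ci_low = estimate - 1.96 * std_error        # 95% confidence interval
ci_high = estimate + 1.96 * std_error

print(f"estimate = {estimate:+.2f}, 95% CI = [{ci_low:+.2f}, {ci_high:+.2f}], p = {p_value:.2f}")
# Output: estimate = +0.03, 95% CI = [-0.05, +0.11], p = 0.45
# The interval spans zero and p exceeds the conventional 0.05 threshold, so a
# study using that threshold would report "no evidence" of an effect, even
# though the point estimate itself is positive.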
One of the federal study’s researchers, Lisa Dragoset of Mathematica, defended its approach. Setting a high bar for significance is reasonable for a program like SIG that was quite costly, she said. And even setting statistical significance aside, the estimated gains in reading were small, and the estimated effect in math was essentially zero, she noted.
“It’s unlikely that there were substantive or large impacts that were undetected by our study,” she said.
Second, FutureEd points out that the subset of schools in the federal study was not representative of all schools receiving the grants.
The study compared schools that received grants to schools near the eligibility cutoff that didn’t. This is a widely used approach, but the FutureEd authors point out that the results consequently say little about the grants’ effects on the lowest-performing schools. The studied schools were also disproportionately urban.
The new report also highlights a number of studies of specific states and cities that paint a much more positive picture of the initiative. Most, though not all, of these studies find that the grants had positive effects on test scores.
“There are legitimate questions of whether the SIG program represented the best way to use federal funding to improve struggling schools,” the FutureEd authors conclude. “But it is wrong to suggest that there was no return on the SIG investment.”
Dragoset acknowledges that the national results might not apply to all SIG schools, but says there is nothing inconsistent about seeing no clear national effect alongside positive results in certain states.
“Ours, to my knowledge, is the only large-scale, rigorous study of the SIG program nationwide,” she said.
Dan Goldhaber, a University of Washington professor and vice president at the American Institutes for Research who reviewed the FutureEd report, said the limits of the federal study suggest policymakers shouldn’t “jump to the conclusion that the SIG program didn’t work.” But the study “was competently done,” he said.
The FutureEd analysis concludes that the U.S. Department of Education should spearhead a review of the research on turning around struggling schools. Tom Dee, a Stanford professor who found that the federal grants led to higher test scores in California schools, echoed that recommendation.
“I believe the question we should be asking is the following: why do federal reform catalysts seem to generate positive change in some states and communities but mere cosmetic regulatory compliance in others?” he said. “It seems to me that knowing more about the answer to that question is critical to efforts to drive meaningful change at scale.”