School improvement programs have sprouted up around the country to help turn around long-struggling schools. And the money that comes with them has also spurred the growth of external groups offering their services to schools.
But there’s little evidence on whether that school improvement industry, paid for by taxpayers, is actually boosting student learning, according to a study of 151 turnaround providers endorsed by various state agencies.
It’s a striking finding in light of the significant role third-party groups play in supporting these efforts. But it’s not entirely surprising, considering the disappointing (albeit still debated) track record of turnaround efforts across the country, and anecdotal reports that providers with little experience emerged once the spigot of federal dollars was turned on.
There’s also a less damning explanation: Research is expensive and difficult to undertake. Programs may be successful, even if no study exists to prove it.
Still, the results raise questions about the role of third-party turnaround providers, which may continue as federal education law requires turnaround efforts in each state’s lowest-performing 5 percent of schools.
This growth of outside groups has likely come because schools have had limited support to implement turnaround strategies, including from state education departments, many of which worked with or encouraged the use of external providers. In the 2012-13 school year, for instance, 34 states and two-thirds of districts implementing federal turnarounds reported contracting with external consultants to support those efforts.
“Our results suggest that states have not prioritized evidence of impact when endorsing providers to work in their schools and districts,” wrote Coby Meyers and Bryan VanGronigen of the University of Virginia.
The paper, published last month in the peer-reviewed American Journal of Education, examined 13 state websites from 2015 that listed recommended turnaround providers for struggling schools. From there, the researchers created a list of 151 state-backed providers — third-party groups that offer a variety of services to low-performing schools, including professional development, teacher recruitment, extra learning time, and improving assessment, parental involvement, and use of technology.
The researchers combed through those providers’ websites and academic research databases, and even emailed each organization, to see whether the organizations had research showing that their programs had led to better test scores or graduation rates.
Few did. Only 11 percent of the 151 had research backing their specific program, and only some of those studies focused on school turnarounds. That’s especially notable because these were all organizations recommended by state agencies, though the researchers point out, “almost no research exists on how [states] recruit, vet, endorse, or evaluate providers.”
Notably, only half of the providers reviewed even claimed on their websites that their services were based on research. And that claim meant only that a group said its general approach was supported by research, not that its particular program was.
In some ways, the results are no surprise, partly because of the fairly high bar being set: to count as having evidence, a program had to show gains in a study that attempted to isolate cause and effect. Such studies are expensive, researchers may be less likely to evaluate one-off programs on their own, and certain approaches are simply hard to evaluate. That means many programs have never been studied at all.
One of the providers that did have supportive evidence, based on a study in Ohio, was a University of Virginia program known as the Partnership for Leaders in Education. That’s also where Coby Meyers, one of the researchers behind the latest study, works as chief research officer, in addition to being an associate professor at the university.
Others found to have supporting evidence included Success for All, a schoolwide turnaround program that includes efforts to strengthen professional development and curriculum; City Connects, a Boston-based community schools group; and eMINTS, which focuses on the use of technology in schools.
The researchers say it’s puzzling that so few had strong evidence since they were endorsed by state agencies. “Given how few providers had evidence of impact and how many were not research based, a key question arises: What is the rationale for endorsing providers that lack evidence of impact?” they write.
Word of mouth may play a big role in schools’ decision-making. “Schools tend to contract with providers used by other schools in their own districts in the past, regardless of past performance,” according to an older study of Texas schools that received federal funds in the early 2000s to conduct what was called “comprehensive school reform.”
There’s still a great deal that’s not known about these external providers, including how much money is spent on them.
Meanwhile, turning around struggling schools remains vexing for policymakers. High-profile efforts, including by the federal government and New York City, have proven largely disappointing, though each can point to some bright spots.
The funds designated under federal law to help struggling schools are supposed to go toward “evidence-based interventions.” The standard for what counts as evidence is not especially high, though, focusing on the idea rather than the provider behind it.
Nora Gordon, a professor at Georgetown who has researched such third-party groups, said their likelihood of succeeding won’t be determined just by what they plan to do, but also by local context and how they implement it.
“There could be two providers who sell intervention X and one does a good job and one does a bad job,” she said. “If you think about ESSA, they’re both going to say we’re using this research-based strategy.”