Eight years ago, the L.A. Times published teachers’ ratings. New research tells us what happened next.

In 2010, the Los Angeles Times did something that hadn’t been done before.

The newspaper published test-score data for thousands of the city’s public school teachers, assigning them a rating based on how they influenced students’ results.

It caused a firestorm. Critics, including many teachers, railed against the measures as misleading and poorly constructed, warning that the data would demoralize teachers. The L.A. Times itself defended the release as necessary transparency.

In an accompanying story, one local teacher suggested it might also help children by empowering parents “to demand a good teacher.”

New research suggests that’s what happened next — but only for certain families.

Publishing the scores meant already high-achieving students were assigned to the classrooms of higher-rated teachers the next year, the study found. That could be because affluent or well-connected parents were able to pull strings to get their kids assigned to those top teachers, or because those teachers pushed to teach the highest-scoring students.

In other words, the academically rich got even richer — an unintended consequence of what could be considered a journalistic experiment in school reform.

“You shine a light on people who are underperforming and the hope is they improve,” said Jonah Rockoff, a professor at Columbia University who has studied these “value-added” measures. “But when you increase transparency, you may actually exacerbate inequality.”

That analysis is one of a number of studies to examine the lasting effects of the L.A. Times’ decision to publish those ratings eight years ago. Together, the results offer a new way of understanding a significant moment in the national debate over how to improve education, when bad teachers were seen as a central problem and more rigorous evaluations as a key solution.

The latest study, by Peter Bergman and Matthew Hill and published last month in the peer-reviewed journal Economics of Education Review, found that the publication of the ratings caused a one-year spike in teacher turnover. That’s not entirely surprising, considering many teachers felt attacked by the public airing of their ratings.

“Guilty as charged,” wrote one teacher with a low rating. “I am proud to be ‘less effective’ than some of my peers because I chose to teach to the emotional and academic needs of my students. In the future it seems I am being asked to put my public image first.”

But a separate study, by Nolan Pope at the University of Maryland, finds the publication of the ratings may have had some positive effects on students, perhaps by encouraging schools to better support struggling teachers.

Pope’s research showed that Los Angeles teachers’ performance, as measured by their value-added scores, improved after their scores were published. The effects were biggest for the teachers whose initial scores were lowest, and there was no evidence that the improvement was due to “teaching to the test.”

“These results suggest the public release of teacher ratings could raise the performance of low-rated teachers,” Pope concluded.

The two studies offer divergent pictures of the consequences of the L.A. Times’ move. Pope did not find that higher-scoring students moved into the classrooms of higher-scoring teachers, while Bergman and Hill didn’t find clear evidence that teachers improved.

Those varying results are not entirely surprising, since the researchers used different methods. Pope’s research compared the same teachers before and after their value-added scores were published. Bergman and Hill took advantage of the fact that the L.A. Times published scores only for teachers who had taught 60 or more students between 2003 and 2009, creating a natural experiment: they compared teachers who had taught just over 60 kids to otherwise similar teachers who had taught just under 60.
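To make the logic of that cutoff comparison concrete, here is a minimal sketch in Python. It is an illustration only, under assumed data (the file name, column names, and the focus on turnover are hypothetical), not the authors’ actual code or dataset.

```python
# Hypothetical sketch of the 60-student cutoff comparison described above.
# The data file and column names are illustrative assumptions.
import pandas as pd

CUTOFF = 60        # ratings were published only for teachers with >= 60 students
BANDWIDTH = 10     # look only at teachers close to the cutoff

# Assumed columns: students_taught (count of students, 2003-2009) and
# post_turnover (1 if the teacher left the following year, else 0).
teachers = pd.read_csv("teachers.csv")

near_cutoff = teachers[
    (teachers["students_taught"] >= CUTOFF - BANDWIDTH)
    & (teachers["students_taught"] < CUTOFF + BANDWIDTH)
]

published = near_cutoff["students_taught"] >= CUTOFF
turnover_published = near_cutoff.loc[published, "post_turnover"].mean()
turnover_unpublished = near_cutoff.loc[~published, "post_turnover"].mean()

# If publication itself mattered, teachers just over the cutoff should look
# different from otherwise similar teachers just under it.
print(f"Turnover, ratings published:     {turnover_published:.1%}")
print(f"Turnover, ratings not published: {turnover_unpublished:.1%}")
```

The idea is that teachers on either side of an arbitrary 60-student line should be essentially alike, so any gap between the two groups can be attributed to the publication of the ratings rather than to differences among the teachers themselves.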

Rockoff of Columbia said he found both studies credible.

A third study, published in 2016, looks at an entirely different question: Did housing prices in Los Angeles increase near schools with more highly rated teachers?

Not really, according to the paper. That’s somewhat surprising, because past research has shown that housing zoned for schools with higher overall test scores and ratings is more expensive.

The researchers suggest this might be because families had a hard time understanding what the ratings represented, and because some may have tuned out amid the surrounding controversy.

The results arrive in a political climate very different from the one that surrounded the public release of the scores, when conversations about teacher performance had reached a fever pitch.

In 2010, then-Secretary of Education Arne Duncan praised the publication of teacher ratings. He used federal carrots and sticks to encourage states to factor student test scores into how teachers are judged, a policy most states adopted.

But since then, states like New York and Virginia have barred the public release of this performance data, while media organizations have increasingly shied away from publicizing it. The new teacher evaluation systems have run into political challenges, and in some cases have not had the hoped-for effects on student performance. And the federal education law passed in 2015 specifically banned the secretary of education from pushing teacher evaluation rules.