February 21, 2019
This personalized learning program was supposed to boost math scores. It didn’t, new study finds
A program that Bill Gates once called “the future of math” didn’t improve state test scores at schools that adopted it, according to a new study.
August 5, 2018
Eight years ago, the L.A. Times published teachers’ ratings. New research tells us what happened next.
"You shine a light on people who are underperforming and the hope is they improve. But when you increase transparency, you may actually exacerbate inequality."
July 12, 2017
Some New York charter schools could soon be allowed to certify their own teachers. What could that look like?
A handful of training programs at charter schools may soon substitute for the formal state certification process.
June 18, 2013
State to use a "value-added" growth model without calling it that
State test scores won't count more toward the evaluations of elementary and middle school teachers next year, according to an amended proposal that a Board of Regents committee passed unanimously on Monday.

The proposed model, which was formally approved on Tuesday, included a methodology for calculating student growth that was nearly identical to the "value-added" model that State Education Commissioner John King brought to the board in April. Both models add new data points to the formula used to approximate how much each teacher has contributed to students' growth. But under state law, any model termed "value-added" would have required, controversially, that its weight on some teacher evaluations increase from 20 to 25 percent.

King's alternative this month was for the state to adopt an "enhanced growth model" that adds virtually all of the same data points but doesn't carry the value-added moniker. Spurning the name allows the state to avoid increasing the weight of test scores until all districts have at least one year of implementation under their belts, something the state teachers union has asked for.

"I would have thought that adding all these factors would qualify as 'value-added,' but this distinction was always opaque," said Jonah Rockoff, a Columbia University economist who advised the state on its methodology. "If the commissioner wants to keep the weight at 20 percent for another year, then staying within the 'student growth' framework seems like the simplest way to do it."
February 29, 2012
Why it's no surprise high- and low-rated teachers are all around
The New York Times' first big story on the Teacher Data Reports released last week contained what sounded like great news: After years of studies suggesting that the strongest teachers were clustered at the most affluent schools, top-rated teachers now seemed as likely to work on the Upper East Side as in the South Bronx. Teachers with high scores on the city's rating system could be found "in the poorest corners of the Bronx, like Tremont and Soundview, and in middle-class neighborhoods," "in wealthy swaths of Manhattan, but also in immigrant enclaves," and "in similar proportions in successful and struggling schools," the Times reported.

Education analyst Michael Petrilli called the findings "jaw-dropping news" that "upends everything we thought we knew about teacher quality."

Except it's not really news at all. Value-added measurements like the ones used to generate the city's Teacher Data Reports are designed precisely to control for differences in neighborhood, student makeup, and students' past performance. The adjustments mean that teachers are effectively ranked relative to other teachers of similar students. Teachers who teach similar students, then, are guaranteed to have a full range of scores, from high to low. And, unsurprisingly, teachers in the same school or neighborhood often teach similar students.

"I chuckled when I saw the first [Times story], since the headline pretty much has to be true: Effective and ineffective teachers will be found in all types of schools, given the way these measures are constructed," said Sean Corcoran, a New York University economist who has studied the city's Teacher Data Reports.
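The mechanics Corcoran describes can be illustrated with a minimal sketch. This is a hypothetical toy model, not the city's actual methodology: here a "value-added" score is simply the residual after subtracting each student's prior score, so scores are centered near zero and span a similar range at every level of prior achievement, in affluent and struggling schools alike.

```python
import random

random.seed(0)

def simulate_school(prior_mean, n_students=1000):
    """Simulate value-added residuals for one school whose students
    share a common level of prior achievement (hypothetical numbers)."""
    residuals = []
    for _ in range(n_students):
        prior = random.gauss(prior_mean, 5)          # last year's score
        teacher_effect = random.gauss(0, 3)          # true teacher contribution
        noise = random.gauss(0, 2)                   # measurement noise
        current = prior + teacher_effect + noise     # this year's score
        # Toy value-added score: growth beyond what the prior score predicts.
        residuals.append(current - prior)
    return residuals

affluent = simulate_school(prior_mean=80)    # high prior achievement
struggling = simulate_school(prior_mean=50)  # low prior achievement

# Both schools show a full spread of scores centered near zero,
# even though their students' raw achievement levels differ widely.
print(round(sum(affluent) / len(affluent), 2),
      round(sum(struggling) / len(struggling), 2))
```

Because the prior score is subtracted out by construction, the distribution of residuals looks the same in both simulated schools, which is why finding high- and low-rated teachers everywhere is built into measures of this kind rather than being a discovery.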
February 23, 2012
Why we won't publish individual teachers' value-added scores
Tomorrow's planned release of 12,000 New York City teacher ratings raises questions for the courts, parents, principals, bureaucrats, teachers, and one other party: news organizations. The journalists who requested the release of the data in the first place now must decide what to do with it all.

At GothamSchools, we joined other reporters in requesting to see the Teacher Data Reports back in 2010. But you will not see the database here, tomorrow or ever, as long as it is attached to individual teachers' names.

The fact is that we feel a strong responsibility to report on the quality of the work the 80,000 New York City public school teachers do every day. This is a core part of our job and our mission. But before we publish any piece of information, we always have to ask a question. Does the information we have do a fair job of describing the subject we want to write about? If it doesn't, is there any additional information, whether context, anecdotes, or quantitative data, that we can provide to paint a fuller picture?

In the case of the Teacher Data Reports, "value-added" assessments of teachers' effectiveness that were produced in 2009 and 2010 for reading and math teachers in grades 3 to 8, the answer to both those questions was no. We determined that the data were flawed, that the public might easily be misled by the ratings, and that no amount of context could justify attaching teachers' names to the statistics.

When the city released the reports, we decided, we would write about them, and maybe even release Excel files with names wiped out. But we would not enable our readers to generate lists of the city's "best" and "worst" teachers or to search for individual teachers at all.

It's true that the ratings the city is releasing might turn out to be powerful measures of a teacher's success at helping students learn. The problem lies in that word: might.
February 19, 2009
Getting an F or a D led schools to assign fewer essays, projects
When the Bloomberg administration announced it would assign every public school a letter grade, based largely on test scores, critics worried the grades would lead to a "drill and kill" approach to teaching. Forced to raise test scores, they said, schools might avoid teaching creativity and problem-solving in favor of focusing on basic skills.

New research suggests that the critics' worries may have come true, but the researchers don't think that's necessarily a bad thing. Jonah Rockoff, a professor at Columbia Business School who has been studying the Bloomberg administration's accountability system, presented the finding today at a lunch at New York University. It's part of a paper whose central conclusion, that grading schools with D's and F's led them to improve their test scores, was publicized last year. But the paper has many other interesting aspects, and Rockoff's research is continuing. Today, I'll stick to the "back to basics" idea; future posts will tackle other areas of interest.

Rockoff's paper draws three findings about schools tagged with D's and F's that support the "back to basics" conclusion. In the months after receiving the failing grades, these schools 1) spent less time on work that involved essays and projects; 2) placed more emphasis on using test score data to make decisions about curriculum; and 3) were less likely to have teachers report that their administrators focused on teaching quality.
November 11, 2008
For most students, no benefit to a school's F grade, study finds
A study examining whether getting poor grades on city progress reports prompted schools to improve their students' test scores found little evidence of such a boost.

The study, released today by the conservative-leaning Manhattan Institute, asked the question by comparing schools whose progress report raw scores were roughly the same but just different enough to earn different letter grades. The two groups showed about the same amount of progress, except in fifth-grade math, where students in failing schools made "significant and substantial improvement" compared to their peers in schools that had been assigned a grade of D, according to the study.

The progress reports assign letter grades to schools based primarily on improvements in students' test scores. Since the first reports were released a year ago, the program has been the subject of sustained criticism: Parents and teachers have complained about unfair stigmatization of good schools, and statisticians have charged that the reports are driven as much by error as by actual school improvement.

The study's author, Manhattan Institute senior fellow Marcus Winters, called his findings "mixed-positive" in favor of the progress reports. Those findings were the subject this morning of a panel discussion sponsored by the Manhattan Institute featuring Winters, Columbia University economist Jonah Rockoff, and two officials from the Department of Education's accountability office, including its CEO, James Liebman.