Hechinger Report

New York

Q&A: Klein disciple Nadelstern laments end of disruptive era

As Mayor Bloomberg’s term in office comes to an end in New York City, mayoral candidates have been quick to denounce many of his education policies. A recent poll found that a majority of residents disapprove of the outgoing mayor’s handling of public schools, and the current crop of candidates is unhappy with school closures and the school grading system currently in place. The Bloomberg administration can count Eric Nadelstern, former deputy chancellor for school support and instruction under Bloomberg and currently a Professor of Practice in Educational Leadership at Teachers College, Columbia University, as one of its staunchest defenders. Nadelstern spoke to The Hechinger Report about his thoughts on the future of public education in New York City and his recent book, 10 Lessons From New York City Schools, which draws on his 40 years of experience working in public education.

Question: There’ll be a new mayor in the city soon. Any trepidation that some of the policies you talk favorably about in your book might end?

Answer: Sad to say, but I think they’ve changed already under the old mayor. I see networks being redirected away from school support toward central-office compliance matters, which disturbs me. I see the core curriculum being mandated in a way reminiscent of the old days, when superintendents mandated curriculum rather than rolling it out in a way that creates options for schools to engage with it creatively, or not, as they choose. Those decisions and policies trouble me. Under a new mayor, I think the two areas in greatest jeopardy are school closings, which also create the opportunity to open new schools, and the question of whether the non-geographic network structure will revert to the old district structure headed by superintendents. Politicians in particular favor the old structure because they could exploit it to their benefit more easily.

Q: What changes are you talking about?
New York

Nationally, federal turnaround funding generates mixed reviews

New York City's controversial school turnaround proposals represent a tiny piece of a sweeping effort, funded by the U.S. Department of Education, to overhaul the country's lowest-performing schools. In the first of three articles about the reform effort produced by Education Week, The Hechinger Report, and the Education Writers Association, Alyson Klein examines the effects of federal School Improvement Grants on districts across the country, and the grants' uncertain future. GothamSchools was one of a dozen news organizations to contribute to the reporting.

After two years, the federal program providing billions of dollars to help states and districts close or remake some of their worst-performing schools remains an ambitious work in progress, with roughly 1,200 turnaround efforts under way but still no verdict on its effectiveness. The School Improvement Grant (SIG) program, supercharged by a $3 billion windfall under the federal economic-stimulus program in 2009, has jumpstarted aggressive moves by states and districts. To get their share of the money, they had to quickly identify some of their most academically troubled schools, craft new teacher-evaluation systems, and carve out more time for instruction, among other steps. Some schools and districts spent millions of dollars on outside experts and consultants. Others went through the politically ticklish process of replacing teachers and principals, while combating community skepticism and meeting the demands of district and state overseers.

It’s not at all clear whether the federal prescription can cure the most ailing schools and lead to long-term improvements, but preliminary student achievement data for the program offer some promise. The U.S. Department of Education looked at about 700 of the schools in their second year of the program and found that a quarter of them posted double-digit gains in math during the 2010-11 school year. Another 20 percent showed similar progress in reading.
A collaborative reporting project drawing on the efforts of more than 20 news organizations and affiliated journalists paints a mixed picture of how the SIG program is playing out on the ground. The major findings show:
New York

Integral to "value-added" is a requirement that some score low

Add one more point of critique to the city’s Teacher Data Reports: Experts and educators are worried about the bell curve along which the teacher ratings fell. Like the distribution of teachers by rating across types of schools, the distribution of scores among teachers was essentially built into the “value-added” model that the city used to generate the ratings.

The long-term goal of many education reformers is to create a teaching force in which nearly all teachers are high-performing. However, in New York City’s rankings — which rated thousands of teachers who taught in the system from 2007 to 2010 — teachers were graded on a curve. That is, under the city’s formula, some teachers would always be rated as “below average,” even if student performance increased significantly in all classrooms across the city.

The ratings were based on a complex formula that predicts how students will do on standardized tests after taking into account background characteristics. Teachers received scores based on students’ actual test results measured against the predictions. They were then divided into five categories: half of all teachers were rated as “average,” 20 percent were “above average,” and another 20 percent were “below average.” The remaining 10 percent were divided evenly between teachers rated as “far above average” and “far below average.”

IMPACT, the District of Columbia’s teacher-evaluation system, also uses a set distribution for teacher ratings. As sociologist Aaron Pallas wrote in October 2010, “by definition, the value-added component of the D.C. IMPACT evaluation system defines 50 percent of all teachers in grades four through eight as ineffective or minimally effective in influencing their students’ learning.”
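The mechanics of grading on a fixed curve can be made concrete with a short sketch. This is a hypothetical illustration, not the city's actual formula: it assigns the article's stated bands (5 percent "far below," 20 percent "below," 50 percent "average," 20 percent "above," 5 percent "far above") purely by rank, which is enough to show why a set share of teachers must always land in the bottom bands, no matter how much every classroom improves.

```python
import random

def rate_on_curve(scores):
    """Assign ratings by rank using the fixed bands the article describes:
    5% far below, 20% below, 50% average, 20% above, 5% far above."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])  # indices, worst first
    bands = {}
    for rank, idx in enumerate(order):
        pct = rank / n  # fraction of teachers scoring below this one
        if pct < 0.05:
            bands[idx] = "far below average"
        elif pct < 0.25:
            bands[idx] = "below average"
        elif pct < 0.75:
            bands[idx] = "average"
        elif pct < 0.95:
            bands[idx] = "above average"
        else:
            bands[idx] = "far above average"
    return [bands[i] for i in range(n)]

# A simulated cohort of 1,000 value-added scores.
scores = [random.gauss(0, 1) for _ in range(1000)]
before = rate_on_curve(scores)

# Even if every teacher's score rises by the same large amount,
# the ratings are unchanged: the curve is purely relative.
after = rate_on_curve([s + 10 for s in scores])
assert before == after
```

Because the cutoffs are percentiles rather than absolute thresholds, uniform citywide improvement moves no one between bands, which is the property critics of the fixed distribution were pointing to.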
New York

City's value-added initiative early entrant to evolving landscape

New York City schools erupted in controversy last week when the school district released its “value-added” teacher scores to the public after a yearlong battle with the local teachers union. The city cautioned that the scores had large margins of error, and many education leaders around the country believe that publishing teachers’ names alongside their ratings is a bad idea. Still, a growing number of states are now using evaluation systems based on students’ standardized test scores in decisions about teacher tenure, dismissal, and compensation. So how does the city’s formula stack up against methods used elsewhere?

The Hechinger Report has spent the past 14 months reporting on teacher-effectiveness reforms around the country and has examined value-added models in several states. New York City’s formula, which was designed by researchers at the University of Wisconsin-Madison, has elements that make it more accurate than other models in some respects, but it also has elements that experts say might increase errors — a major concern for teachers whose job security is tied to their value-added ratings. “There’s a lot of debate about what the best model is,” said Douglas Harris, an expert on value-added modeling at the University of Wisconsin-Madison who was not involved in the design of New York’s statistical formula. The city used the formula from 2007 to 2010 before discontinuing it, in part because New York State announced plans to incorporate a different formula into its teacher evaluation system.
New York

Federal Head Start reauthorization puts city's status in jeopardy