
Saturday, July 18, 2015

Delran Education Association's Statement on NJDOE's Teacher Evaluation Data


You may have seen this week that the New Jersey Department of Education released teacher evaluation data from the 2013-14 school year. There have been headlines like "Which NJ School Districts Have The Most 'Ineffective' Teachers?" and "Hundreds of Teachers in Monmouth, Ocean Counties Rate Poorly on State Teacher Evaluations." They really get your attention, right? Sounds like the paper should have been delivered with a free pitchfork.

But what do the ratings actually mean? What are the circumstances of the students in those districts? Will this have any bearing, at all, on how your child does in school? Can a teacher be "effective" one year and not the next? Is your child's special ed teacher measured the same way as the gifted & talented teacher, or, for that matter, the regular ed teacher? Does it matter?

Given the kind of blog I have, you won't be surprised that I think the teacher evaluation process is deeply flawed. I am horrified that anyone thinks they can quantify my daughter's annual progress without actually looking at her work, and then, using only formulas and standardized tests, call the result the "effectiveness" of her teachers. It sounds ridiculous because it is.

Fortunately, in New Jersey, there are some very sane voices to counter the crazy and the hysterical, and the collective voice of the Delran Education Association is among them. They released a statement yesterday on the evaluation system. If you're fortunate enough to live in one of the sparkly districts of this state, you won't find many teachers classified as "ineffective," and Delran is no different. What sets this Education Association apart is strong leadership and the willingness to not sit down and shut up. They are fighting for their profession and for the education of your children. Please read their statement, consider it, and share it.

On Wednesday, July 15th, the New Jersey Department of Education released the 2013-2014 results of the state’s new teacher evaluation system.

As educators and members of the Delran Education Association, we are deeply committed to providing the best education possible to every child in our care—and the data published on Wednesday suggests that we are successful in doing so.

However, the new teacher evaluation system is so inherently flawed that the “results” released for schools across the state are not, in any way, an accurate representation of educator or district effectiveness. As such, any comparisons between school districts, between schools within districts, or even between teachers in the very same building are virtually meaningless.

There are many variables that have been consistently proven to affect a student’s ability to learn, and the Department of Education’s report ignores virtually all of them. The report does not show, for example, how many students with special needs each district serves, or whether those students receive the supports they need. It does not show how many students in any given district are non-native English speakers. It does not show whether schools are underfunded or adequately funded. It does not show whether the majority of a district’s students are living above or below the poverty line. And it does not show whether classes are overcrowded or adequately sized. Already, and not surprisingly, news stories have emerged with headlines asking, “Which N.J. School Districts Have the Most ‘Ineffective’ Teachers?” It is no coincidence that most of the districts at the top of the list are grossly underfunded and understaffed, serve many students who live in poverty, have high numbers of special-needs and ELL students, and are in areas in which privatization efforts, driven by the narrative that teachers and schools are “failing,” are evident at every turn. No one should be surprised when the results from this meaningless evaluation report are used to justify such privatization efforts.

Further, the Department of Education’s report does not acknowledge the subjectivity involved in interpreting the approved evaluation frameworks (e.g., Danielson, McREL, Stronge, Marshall). Even within a single building, and using the same evaluation model, there can be great inconsistency among the administrators tasked with “evaluating” teachers by subjectively interpreting generic rubrics that reduce the art of teaching and learning to a checklist. Indeed, two administrators watching the same teacher and referring to the same rubric often assign very different scores, since interpreting even the most specific, standardized rubrics involves subjectivity. An administrative team can certainly manipulate a rubric to achieve the types of scores it would like to see, and teachers across the state have reported such practices in their own buildings. Many teachers report being evaluated by an administrator who prides himself or herself on being the “lowest scorer” on the administrative team, for instance, while others receive predictably good marks from administrators who consistently assign high scores. For this reason, many local associations, like our own, have begun to track patterns in evaluators’ scoring and have found unequivocal evidence that many evaluators’ scores can be predicted with fair accuracy. According to the Department of Education’s report, one Mercer County school district rated 94% of its teachers as highly effective, whereas in our district only 30% attained this level. Is either of these statistics meaningful or accurate if we can document and demonstrate evaluator bias?

Make no mistake: the solution to this problem is not to rely more heavily on the use of scores on equally flawed standardized tests to evaluate teachers, because ultimately, just as standardized test scores are not accurate or appropriate measures of student learning, they cannot and should not be used to determine teacher effectiveness. (Unfortunately, some teachers were evaluated using students’ scores on the NJASK, a test the Department of Education now says was virtually worthless, in the 2013-2014 academic year: another flawed aspect of the new teacher evaluation system.)

Ultimately, we believe that the best way for parents, taxpayers, and other stakeholders to determine the degree to which students are thriving in school is to examine students’ work regularly over the course of an academic year to see progress; maintain contact with students’ teachers to discuss students’ strengths and areas for improvement; and visit schools regularly to see the kinds of programs, both academic and extracurricular, in which students participate on a daily basis. These authentic measures of student achievement and overall well-being are certainly more meaningful than unreliable numbers on a Department of Education spreadsheet.

The members of the Delran Education Association will continue to advocate for an evaluation system that is meaningful, consistent, fair, and aimed solely at improving teacher practice and student learning for the benefit of the children in our immediate care—instead of one designed primarily to rank and compare schools and teachers by oversimplifying the teaching and learning process and ignoring the many variables that influence student achievement and teacher effectiveness.
