
How NYSED Should Count Opt-Outs in the New State Accountability System under ESSA

To: Commissioner MaryEllen Elia and the NY Board of Regents
From: Class Size Matters & NY State Allies for Public Education
Date: February 15, 2017

As of now, NYSED, when calculating a school's academic achievement in its ESSA accountability system, appears intent on assigning a "0" or "1" in proficiency to every student who opts out of the state exams. This would wrongly identify schools with high opt-out rates as low-performing, no matter their actual average level of student achievement. If this occurs, many, if not most, of the schools the accountability system identifies as struggling and in need of comprehensive academic support and improvement will likely be those schools on Long Island and elsewhere where more than 50% of the students opted out, but where achievement is relatively high.
 
As an illustration, consider a school that has one hundred students who are eligible to take the NYS tests. Of those one hundred students, seventy refuse the tests. Of the thirty who took the tests, twenty-five scored proficient. In other words, approximately 83% (25/30) of students who took the tests were deemed proficient. However, ESSA requires that when doing this calculation, the denominator must be either the number of students who took the test (30 in this example) or 95% of all students who were eligible to take the test (95 in the example), whichever number is greater. If students who refused the test are considered as “0”s or “1”s as NYSED has been proposing, and must be counted in the denominator as ESSA requires, the school’s proficiency rate for this academic indicator drops from 83% (25/30) to approximately 26% (25/95). Consequently, the school may be erroneously identified as being in the lowest 5-10% of schools identified for support and improvement, with the result that valuable resources would be diverted from other schools in need.
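The arithmetic in the illustration above can be sketched as follows (the variable names are ours, used only for illustration):

```python
# Worked example: how counting opt-outs as zeros drags down a school's
# proficiency rate under the ESSA denominator rule.

eligible = 100    # students eligible to take the NYS tests
tested = 30       # students who actually took the tests
proficient = 25   # tested students who scored proficient

# ESSA Section 1111(c)(4)(E): the denominator is the greater of
# 95% of all eligible students or the number who participated.
denominator = max(0.95 * eligible, tested)   # 95 in this example

rate_among_tested = proficient / tested              # about 83%
rate_optouts_as_zero = proficient / denominator      # about 26%

print(f"Among tested students: {rate_among_tested:.0%}")
print(f"With opt-outs counted as zeros: {rate_optouts_as_zero:.0%}")
```

The drop from 83% to 26% reflects nothing about how students actually performed; it is driven entirely by how many students refused the test.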
 
To address this issue, a closer look at ESSA is in order.  Here is Section 1111(c)(4)(E): 
 
ANNUAL MEASUREMENT OF ACHIEVEMENT. –…
(ii) For the purpose of measuring, calculating, and reporting on the indicator described in subparagraph (B)(i), include in the denominator the greater of–
(I) 95 percent of all such students, or 95 percent of all such students in the subgroup, as the case may be;
or
(II) the number of students participating in the assessments.
 
Yet while the denominator is specified, neither the law nor the regulations say what should be entered into the numerator for these students.
 
A reasonable solution that would prevent the state’s accountability system from falsely identifying schools with test refusals as automatically in need of comprehensive or targeted support would be to calculate the average proficiency level of the other students at the school who were tested, or the average proficiency level of the subgroups to which they belong, and insert that value into the numerator for those students who opted out. That way, the scores recorded for a subgroup or a school as a whole reflect the actual performance on the exams by students who actually took the exam.
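The proposed calculation can be sketched as follows; this is a minimal illustration of the imputation idea described above, not an official NYSED formula, and the function and variable names are ours:

```python
def rate_with_imputation(eligible, tested, proficient):
    """Compute a school's proficiency rate under the ESSA denominator
    rule, crediting each opt-out student with the average proficiency
    of the students who actually took the exam."""
    # ESSA Section 1111(c)(4)(E): greater of 95% of eligible or number tested.
    denominator = max(0.95 * eligible, tested)
    avg = proficient / tested                  # average proficiency of tested students
    imputed = denominator - tested             # opt-outs forced into the denominator
    numerator = proficient + avg * imputed     # credit opt-outs at the tested average
    return numerator / denominator

# The earlier illustration: 100 eligible, 30 tested, 25 proficient.
print(f"{rate_with_imputation(100, 30, 25):.0%}")
```

With this approach the school's rate matches the 83% proficiency of the students who actually sat for the exam, rather than collapsing to 26% because of the opt-outs.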
 
Otherwise, the scores used are inaccurate, since they arbitrarily assume low performance for students who did not take the test.

There are additional reasons to adopt the methodology described above:
1. ESSA clearly states that the US Secretary of Education is prohibited from telling a state how school participation rates must be factored into its accountability system (see Section 1111(e)(1)(B)(iii)(XI)). As David P. Cleary, Sen. Lamar Alexander's chief of staff, explained: "The Secretary, in section 1111(e)(1)(B)(iii)(XI) of the new law, is prohibited from telling a state how that requirement must be factored into the state accountability system. This means that the Secretary cannot say a school must drop a rating in the state accountability system, or be forced to enter into some kind of school improvement, or dictate any solution from Washington. How the 95 percent requirement factors into the state accountability system and the consequences for a school that fails to meet the 95 percent requirement are state decisions… A state could choose a variety of outcomes—including that test participation has no impact on a school's identification…"

2. ESSA specifically authorizes states to allow parents to opt their children out of exams, with no fear of consequences, in Section 1111(b)(2)(K): "RULE OF CONSTRUCTION ON PARENT RIGHTS. — Nothing in this paragraph shall be construed as preempting a State or local law regarding the decision of a parent to not have the parent's child participate in the academic assessments under this paragraph."

3. The House of Representatives has approved HJ Resolution 57, and the Senate is likely to follow suit soon. This Resolution essentially nullifies the overly prescriptive ESSA accountability regulations promulgated by the US Department of Education under former Secretary King, and precludes the agency from creating new regulations on this issue unless Congress approves legislation authorizing it. Thus states will be able to act on their own, with more flexibility in devising their accountability systems, and without the risk of being overruled by the US Department of Education.

4. In any case, wrongly identifying schools with low participation as the lowest-performing schools in the state violates the spirit of the law, which is to provide support and improvement to those schools that are truly struggling, rather than to use the law as a hammer to force parents to make their children take the state exams. It would also violate the public position of Chancellor Rosa, who has stated that the law recognizes that parents have the right to opt their children out of these exams without fear of penalties. Arbitrarily counting opt-out students as having failed the state tests in a school's accountability formula would contradict this.

Finally, it is our view that test-based accountability systems lack validity, and that the state tests are flawed and yield little useful information. Even if the tests were improved to produce reliable data, that data is misused in the state's accountability system as well as in the receivership law. Since the Every Student Succeeds Act (ESSA) persists in mandating annual grade 3-8 testing and the use of these scores in the state accountability system, the above proposal is an interim solution for how the state can minimize the harm done by the misuse of test score data. However, to truly serve students, we believe that high-stakes testing should be eliminated.
