In the same vein as the other assessment results, the following are the PARCC English II assessment results from the 2014-2015 school year ranked by individual school. Unlike the Algebra I test results, the English II results are straightforward in that they apply to only one school: the course and its end-of-course assessment are taken only at the high school level, in contrast to Algebra I, which can be taken at the middle school level.
The results ranked by school can be accessed by clicking on the following link:
2014-2015 Rankings by MS School – PARCC Eng II Assessment
As with all PARCC assessments given for the first (and only) time in the 2014-2015 school year, there is no way to determine growth. We know nothing about the level of achievement students exhibited when they started the course; the only data available are the final end-of-course scores. The data are purely for informational purposes, and I hope those interested find them useful. As always, please let me know if you spot any issues.
Judging from the interest in the other assessments, I thought perhaps someone might also like to look at the secondary level PARCC end-of-course assessment results for Algebra I. They can be accessed by clicking the following link:
2014-2015 Rankings by MS School – PARCC Alg I Assessment
However, more caution should be used in examining these Algebra I PARCC results than with any others listed. Several extremely important differences in how the Algebra I assessment is given and reported make it unique.
Caveats of the 2014-2015 PARCC Algebra I results:
Algebra I is unique in that students may take it during the middle school years (typically the 8th grade). The middle school students who took Algebra I in 2014-2015 all took the end-of-course PARCC Algebra I assessment just as their high school counterparts did. In many school districts across the state, students who have demonstrated advanced achievement in 7th grade mathematics are allowed to take Algebra I in the 8th grade in order to “get a jump” on the accumulation of high school credits. This “jump” might pay off by freeing up the student to take more advanced electives, dual-credit/enrollment, or AP courses later in high school. Why is this important when analyzing results reported by school?
- In a situation where a district has a separate elementary, junior high, or middle school which includes a 7th or 8th grade and has Algebra I testers, those results will show up under the elem/jr. high/middle school where the students took the course. This has a two-fold effect. First, the school with the junior high test takers will typically have extremely high test scores, as the more advanced students are typically the ones enrolled in the course (with some exceptional cases where, for strategic reasons, the opposite takes place with polar opposite results). Second, the school where those students move on to the 9th grade (the “high school”) will typically have much lower Algebra I scores on average, because the higher-achieving students already took the course in the 8th grade at the elem/jr. high/middle school where they were the year before. Thus, middle schools will typically have much higher scores in comparison to all other school types.
- In some school districts these extremes do not take place at all, and results are not skewed by the “split” between taking Algebra I in the middle school grades. This occurs for three typical reasons. First, some districts have a blanket policy that no student, regardless of achievement, may take Algebra I before 9th grade. In those districts, all students’ scores fall under the high school in which they enter the 9th grade. The only exception is the few schools across the state that include the 9th grade in their middle school or have a middle school made up only of 9th graders. This 9th grade middle school scenario is extremely rare in Mississippi, but it does exist, causing further skewing of results when attempting to compare schools head to head. Second, there are a fair number of high schools which include 7th – 12th grades. In these combined 7th – 12th high schools, no skewing takes place, as all Algebra I test takers are reported under the one school name regardless of the grade in which they take the course. Third, there are a minority of K-12 schools still left across the state. These schools are in the same situation as the 7th – 12th grade high schools just mentioned: their results are not skewed the way results are in the schools described in the first caveat above.
- In an ideal situation, one might compare three categories of schools’ Algebra I results. The first category being elem/jr. high/middle schools with students taking Algebra I in the 7th/8th grade. The second category being high schools which receive students from those type of elem/jr. high/middle schools which allow Algebra I to be taken. The third category being made up of K-12 attendance centers and 7th – 12th high schools whose scores reflect all of their Algebra I students regardless of grade level.
- In the real world, these categories must be taken into consideration when comparing schools (district comparisons are not affected, because all students taking Algebra I, regardless of grade level, end up under the umbrella of the particular district’s results). However, attempting to show these distinctions when examining statewide results is impossible without the state supplying information about each school’s grade levels (and perhaps even its philosophy or rules regarding students taking Algebra I). Since my rankings rely on publicly available data, I have to use my own judgment as to what category a school might fall under.
Due to these very important caveats, I have made my best attempt to show this distinction by ranking schools in two categories. The first category includes K-12 attendance centers and all high schools that have a 9th grade (including both 7th-12th and 9th-12th high schools). The second category includes elementary, junior high, and middle schools which do not have a 9th grade. These categories are not perfect, as some schools (such as those very rare 9th-grade-only schools) have to be lumped into one category or the other even though they are unique situations. Also, some school names may not reflect their actual grade levels (such as Nowhereville High School, which despite its name is actually a K-12 attendance center), which may result in my accidentally placing a school in an inappropriate category. However, I feel the attempt must be made to show at least these two category distinctions, or else the results would make little sense (with middle schools virtually dominating the top half of the rankings for the reasons listed above).
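To make the split concrete, here is a minimal sketch of the two-category rule described above. The function name and the grade encoding (Kindergarten as 0) are my own illustrative assumptions; the actual rankings were assembled by hand from publicly available school information.

```python
# Sketch of the two-category ranking split described above.
# Grades are encoded as integers, with Kindergarten as 0.
# Illustrative only -- not part of the actual ranking process.

def ranking_category(lowest_grade: int, highest_grade: int) -> int:
    """Category 1: schools serving a 9th grade (K-12 attendance
    centers, 7-12 and 9-12 high schools). Category 2: schools
    without a 9th grade (elementary/junior high/middle schools)."""
    return 1 if lowest_grade <= 9 <= highest_grade else 2

print(ranking_category(0, 12))  # K-12 attendance center -> 1
print(ranking_category(9, 12))  # 9-12 high school       -> 1
print(ranking_category(6, 8))   # 6-8 middle school      -> 2
```

Note that a 9th-grade-only school would land in category 1 under this rule, which is exactly the kind of imperfect lumping described above.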
Despite the long-winded dissertation, I hope these results provide information which you find beneficial.
I thought some might be interested in examining the statewide growth rankings by individual school for last year’s assessments to measure Kindergarten readiness given at the beginning and the end of the Kindergarten year. This information can be viewed by clicking on the following link:
2014-2015 Rankings by Growth MS Kindergarten Readiness Assessment by School
This information is valuable in that it gives us “growth” data with which to attempt to see what degree of learning might have occurred over the school year. This is in contrast to the far less desirable end-of-course or end-of-year scores, which give us no indication of the level of achievement students had when they actually began the course.
However, there are several problems with attempting to draw too many conclusions from these rankings or the amount of growth used to determine these rankings. The 2014-2015 Mississippi Kindergarten readiness assessments had several characteristics which absolutely need to be considered when looking at this growth and drawing any conclusions from it:
- This was the first year these assessments were given. During the first year there can be many unforeseen problems (such as technology limitations/problems giving the assessments at particular schools, lack of prior experience using technology by the students at a particular school which might affect scores from reflecting actual student knowledge, etc.). These unforeseen circumstances tell us nothing of the students’ learning in reading, but can skew results. Usually, after a new testing program is started such “first year hiccups” can be alleviated in the following years as districts and schools make adjustments to compensate for such occurrences.
- This assessment is very, very similar to the STAR Early Literacy test produced by Renaissance Learning, which holds the contract to produce this test for Mississippi. Therefore, the test is not designed to assess students who are already quite literate by the end of the year. Yet in many of our schools, a small number of Kindergarten students are often moved up from STAR Early Literacy to the more advanced STAR Reading assessment to determine growth because of their high Early Literacy scores. This is important to note because high-performing students might literally “top out” on this readiness assessment and not show significant growth by the end of the year as they hit the “ceiling” of the assessment’s design. Several students hitting such a ceiling would adversely affect measured “growth,” since little or none could be detected in students who have exceeded the design of the assessment. This is important to remember, especially for schools that show higher average beginning and ending scores.
- The information presented here is for raw scale score growth. It does not tell us how close the students were, on average, to hitting the appropriate growth “target” based upon their individual beginning scores. This is important because students (on average) achieve very different magnitudes of raw or scale score growth depending upon their beginning score. The same assessments, given over multiple years and/or to large numbers of students across the country, allow such “growth targets” to be determined, giving students and teachers an amount of growth which would be statistically “typical” for a student to achieve based upon that student’s initial beginning-of-year score. For example, hypothetically, students beginning the year with a score of 500 might “on average” grow to a score of 674 by the end of the year (174 points of raw score growth). Alternatively, a student beginning the year at 674 might “on average” grow to a score of only 710 (36 points of raw score growth). In this hypothetical situation, a classroom of students who all began the year with a 674 and ended the year with a 725 (51 points of raw score growth) would have “done better” than a classroom of students who all began the year with a 500 and ended the year all scoring a 600 (100 points of raw score growth). The class beginning at 674 grew far more than would be typical for their peers, while the class beginning at 500 grew less than typical, even though in terms of pure score growth (the data contained in this ranking) the 674 class grew less! Instead of magnitude of score growth, comparison to an average “growth target” based upon the beginning-of-year score would be a much better indicator of learning progress.
Such analysis and comparison to what is typical is important to take into account before drawing too many conclusions about one school outperforming another in terms of growth, especially if those schools had very different average Fall (beginning-of-year) scores.
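The difference between raw growth and growth relative to a “typical” target can be sketched with the hypothetical numbers from the example above. The growth-target table and function name here are illustrative assumptions, not part of any actual assessment or its reporting.

```python
# Illustrative sketch: raw growth vs. growth relative to a
# hypothetical "typical" target. The numbers are the made-up
# values from the example above, not real assessment data.

# Hypothetical typical end-of-year score for a given beginning score.
TYPICAL_END_SCORE = {500: 674, 674: 710}

def growth_vs_target(begin: int, end: int) -> tuple:
    """Return (raw_growth, growth_above_typical) for one student."""
    raw_growth = end - begin
    typical_growth = TYPICAL_END_SCORE[begin] - begin
    return raw_growth, raw_growth - typical_growth

# Class beginning at 674 and ending at 725: less raw growth,
# but 15 points MORE than typical for that starting score.
print(growth_vs_target(674, 725))  # (51, 15)

# Class beginning at 500 and ending at 600: more raw growth,
# but 74 points LESS than typical for that starting score.
print(growth_vs_target(500, 600))  # (100, -74)
```

Ranking by the second number instead of the first is precisely what a growth-target comparison would change.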
All of these factors should certainly be considered for the 2014-2015 growth results. As the state, hopefully, sticks with the same or very similar assessments over the next years more data could allow some of these issues to be addressed and perhaps some changes might be made to address the issue of this “top end ceiling,” if growth of all students is truly the “target.”
I have compiled the PARCC results for Mississippi in Language Arts and Mathematics for grades 3-8 by school and ranked them by percent scoring in the top two levels. Using the percent in the top two levels seems to be the preferred method of determining the percent scoring a “Proficient-type” score, which is the goal score range. I thought some might be interested in having the assessment results broken down by school in this fashion. I feel pretty confident in the data at this point, but please let me know if you spot any errors.
Simply click the link below to access the ranking report:
2015 Mississippi PARCC Rankings
*These rankings are for informational purposes only. Growth information is not available because this is the first and only time the PARCC assessments will be given in Mississippi. Growth is far more valuable than end-of-year scores alone for determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without any knowledge of where they “began.” However, such growth information is not and will not be available for the 2014-2015 Mississippi PARCC assessments.