Fall 2015-Spring 2016 Mississippi Kindergarten Readiness Growth Rankings by School

With the recent release of the 2015-2016 Mississippi Kindergarten Readiness Assessment results, I thought some might be interested in examining the statewide growth rankings by individual school for last year’s assessments, which were given at the beginning and the end of the Kindergarten year to measure Kindergarten readiness.  The data is still very fresh, so please do let me know if you spot anything that does not look right.  This information can be viewed by clicking on the following link:

2015-2016 Rankings by Growth MS Kindergarten Readiness Assessment by School

This information is valuable in that it gives us “growth” data with which to gauge how much learning might have occurred over the school year.  This is in contrast to the far less useful end-of-course or end-of-year scores, which give us no indication of the level of achievement students had when they actually began the course.

However, there are several problems with drawing too many conclusions from these rankings or from the amount of growth used to determine them.  The 2015-2016 Mississippi Kindergarten Readiness Assessment had several characteristics that absolutely need to be considered when looking at this growth and drawing any conclusions from it:

  • This assessment is, in every way I can examine, the STAR Early Literacy assessment produced by Renaissance Learning, which holds the contract to produce this test for Mississippi.  Therefore, the test is not designed to assess students who are already quite literate by the end of the year.  Yet, in many of our schools, a small number of Kindergarten students are often moved up from STAR Early Literacy to the more advanced STAR Reading assessment to determine growth because of their high Early Literacy scores.  This is important to note because high-performing students might literally “top out” on this readiness assessment and not show significant growth by the end of the year as they hit the “ceiling” of the assessment’s design.  Several students hitting such a ceiling would adversely affect measured “growth,” since little or none could be detected in students who have exceeded the design of the assessment.  This is especially important to remember in schools that show higher average beginning and ending scores.  The effect can be somewhat illustrated by the graph below:

[Scatter plot: average beginning Fall 2015 scale score vs. average scale score gains by school]

The graph compares the average beginning Fall 2015 score for each school (x-axis) with the average gain in scale score after the Spring 2016 assessment (y-axis).  The correlation is not very strong overall, but you can visibly see the negative trend: the higher the average beginning Fall scale score, the lower the likelihood of being in the top rankings of growth after the ending Spring assessment.  As stated, this negative correlation is not very strong overall, but it is a correlation.  More importantly, notice the schools on the upper end (average Fall score of 545 and above): none of those schools managed to go above 215 points of average growth (scale score gains).  I am not a statistician, so this is far from scientific.
However, I think it points to the strong possibility of the “ceiling” effect to which I am referring, which should be kept in mind when examining growth at individual schools and districts.
  • Along similar lines, the information presented here is raw scale score growth.  It does not tell us how close students came, on average, to hitting an appropriate growth “target” based on their individual beginning scores.  This is important because students, on average, achieve very different magnitudes of scale score growth depending on their beginning score.  The same assessment given over multiple years, or given to large numbers of students across the country, allows such “growth targets” to be determined, giving students and teachers an average amount of growth that would be statistically “typical” for a student to achieve.  In fact, I would assume this data is available, given that the STAR Early Literacy assessment is administered all over the nation.  For example, hypothetically, students beginning the year with a score of 500 might “on average” grow to a score of 674 by the end of the year (174 points of raw score growth).  Alternatively, a student beginning the year at 674 might “on average” grow to a score of only 710 (36 points of raw score growth).  In this hypothetical situation, a classroom of students who all began the year at 674 and ended the year at 725 (51 points of raw score growth) would have “done better” than a classroom of students who all began the year at 500 and ended the year at 600 (100 points of raw score growth).  The class beginning at 674 achieved far more growth than would be typical for their peers, while the class beginning at 500 grew less than would be typical, even though in terms of pure score growth (the data contained in this ranking) the 674 class gained fewer points!  Instead of the magnitude of score growth, comparison to an average “growth target” based on the beginning-of-year score would be a much better indicator of learning progress.
Such analysis and comparison to what is typical is important to take into account before drawing too many conclusions about one school outperforming another in terms of growth, especially if those schools had very different average Fall (beginning-of-year) scores.
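
The hypothetical comparison above can be sketched in a few lines of code.  The growth targets here are the made-up numbers from the example (500 → 674, 674 → 710), not actual STAR Early Literacy norms:

```python
# Hypothetical growth targets keyed by beginning-of-year scale score.
# These are the illustrative numbers from the example above, NOT actual
# STAR Early Literacy norms.
TYPICAL_END_SCORE = {500: 674, 674: 710}

def growth_vs_typical(begin, end):
    """Return (raw_growth, points_above_or_below_typical) for a student."""
    raw_growth = end - begin
    typical_growth = TYPICAL_END_SCORE[begin] - begin
    return raw_growth, raw_growth - typical_growth

# Class beginning at 674 and ending at 725: 51 raw points,
# which is 15 points MORE than the typical 36-point gain.
print(growth_vs_typical(674, 725))

# Class beginning at 500 and ending at 600: 100 raw points,
# which is 74 points LESS than the typical 174-point gain.
print(growth_vs_typical(500, 600))
```

Ranking schools on the second number (growth relative to typical) rather than the first (raw growth) would reverse the order of these two hypothetical classrooms.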

All of these factors should certainly be considered for the 2015-2016 growth results.  If the assessment results are truly going to be used to compare schools and districts head to head in regards to “growth,” then hopefully the “top end ceiling” issue of this STAR Early Literacy assessment will be addressed.  That, and/or an analysis giving typical growth for students with the same beginning score together with a formula to weight the growth “percentage” achieved against those beginning scores, is the only way such a head-to-head comparison of growth can be in any way accurate.
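
The negative trend described for the scatter plot can be checked with a plain correlation calculation.  Here is a minimal sketch, assuming school-level averages are available as two lists; the numbers below are invented for illustration and are not the actual assessment data:

```python
import statistics

# Invented school-level averages for illustration only (NOT the actual
# 2015-2016 assessment data): average Fall beginning scale score and
# average Fall-to-Spring scale score gain for each school.
avg_fall_scores = [470, 490, 510, 530, 550, 570, 590]
avg_gains       = [235, 228, 220, 210, 205, 190, 182]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(avg_fall_scores, avg_gains)
print(f"r = {r:.3f}")  # a negative r is consistent with a ceiling effect
```

A strongly negative r would support the ceiling concern, while an r near zero would not; the real school-level data would of course be needed before drawing any conclusion.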


Clint Stroupe

*The scatter plot graphic shown above can be downloaded for a better view by clicking the link below.  Please feel free to critique my rudimentary knowledge of correlations and the like.  Hopefully, I did not butcher it too badly.

Fall Scores vs Growth Scatter Plot

