SchoolDigger Ratings of Mississippi Schools: A Brief Critique

Many of us have seen the ratings website SchoolDigger and its ranking of Mississippi schools based upon their Mississippi Academic Assessment Program (MAAP) scores.  You might also notice that these ratings, though based upon the same assessment, produce different rankings from the ones I share on this blog.  Truthfully, no one but me may have paid much attention to the difference.  Still, I wanted to explain what SchoolDigger is actually showing and why, in my opinion, its methodology is not the best way to look at assessment results for our Mississippi schools or districts.

SchoolDigger is a great website in many ways.  It gives information on free/reduced lunch rates and student-to-teacher ratios, both valuable things to know about our schools.  But we need to understand what the site is showing and using to determine its school rankings and ratings.  SchoolDigger primarily uses an “average standard score” to rank schools.  The “standard score” it refers to is the MAAP scale score converted to the familiar percent-style grade most of us remember from our days in the classroom.  This conversion of the scale score to a percent (the standard score) is not really the problem.  The problem comes when SchoolDigger takes the scores of every student in the school who took a particular assessment that year and averages them together into its “average standard score,” then uses that average to decide which school performed more or less successfully.  There are two major issues with this approach that make it a poor way to measure the success of any Mississippi school or district on any particular MAAP test.

First, the big problem is that this is an “average.”  Averages are great for some things, but not for others.  Using the SchoolDigger methodology, you could have a hypothetical school with 100 students taking the test; we will call this example test “Test X.”  Of those 100 students who took “Test X” that year, 50 made a “standard score” of 100, and the remaining 50 all made a standard score of 60 (what we would typically consider a failing grade).  The “average standard score” for the school would be an 80, which sounds pretty good.  Yet in this hypothetical, half of the students passed and half failed, which most of us would agree is not the making of a good result for the school, much less a great one.  But the “average standard score” shows an 80, which would be a “B” in traditional terms.  You can see the issue: average scores do not really show how successful the school is in educating, or attempting to educate, all of its students.

Now keep in mind that in the above hypothetical, the school’s “average standard score” used for ranking is an 80.  Let’s imagine we compare it to another school which also tested 100 students on the same “Test X” during the same year.  At the second school, 99 of the 100 students taking the test made an 80 and one single student made a 79.  From a common-sense standpoint, the second school was far more successful than the first, with virtually every student performing at a “B” level on the assessment!  However, the second school’s “average standard score” is only a 79.99, which puts it lower in the ranking than the first example school (where half of the class failed) with its 80.  From this example, and the many others that could occur, I think you can see that “average” scores are a very poor way to rank a school, if the goal is to educate every child.
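To make the comparison concrete, here is a minimal Python sketch using the two hypothetical schools above.  The 70-point passing cutoff is purely my assumption for illustration.

```python
# A minimal sketch (hypothetical data only) showing how an "average
# standard score" can invert a common-sense ranking of two schools.

def average(scores):
    """The SchoolDigger-style 'average standard score': a plain mean."""
    return sum(scores) / len(scores)

def percent_passing(scores, cutoff=70):
    """Percent of students at or above a passing cutoff (70 assumed here)."""
    return 100 * sum(1 for s in scores if s >= cutoff) / len(scores)

school_a = [100] * 50 + [60] * 50   # half the students fail "Test X"
school_b = [80] * 99 + [79]         # virtually every student earns a "B"

for name, scores in [("School A", school_a), ("School B", school_b)]:
    print(f"{name}: average = {average(scores):.2f}, "
          f"percent passing = {percent_passing(scores):.1f}%")

# School A: average = 80.00, percent passing = 50.0%
# School B: average = 79.99, percent passing = 100.0%
# The average ranks School A first even though half its students failed.
```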

Secondly, there is nothing remotely resembling an “average standard score,” or any “average” of student scores, in the Mississippi Accountability Model.  The Mississippi model uses a variety of ways to award “points” to a school, which are then used to determine the school’s rating (A – F).  Setting aside graduation rate and a few other factors which have nothing to do with the MAAP or other standardized tests created by the state, there are only two things the state is concerned with:  growth and the percent scoring in the top two levels on the test.  The top two levels a student can reach (Level 4 and Level 5) are considered “proficient or above” scores on the test, and these are the targets the state wants every student to hit.  Every goal our schools have for testing results is geared toward getting all students to at least a Level 4.  On every single test the state administers for accountability, it only wants to know whether the student grew from their level the previous year and/or reached the target of Level 4 or 5.  “Average” scores have nothing to do with this and cannot tell you what percent of students reached the minimum target of Level 4.  This, without question, makes an “average standard score,” or any type of “average” score, of zero value in relation to our Mississippi Accountability Model on which all schools are judged.
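As a rough illustration of the state’s achievement metric, here is a minimal sketch computing the percent of students scoring Level 4 or 5; the performance levels used are invented for the example.

```python
# A minimal sketch of the achievement metric the state actually uses:
# the percent of students reaching Level 4 or 5 ("proficient or above").

def percent_top_two(levels):
    """Percent of students scoring Level 4 or Level 5."""
    return 100 * sum(1 for lvl in levels if lvl >= 4) / len(levels)

# Hypothetical performance levels (1-5) for one school's test takers:
levels = [5, 4, 4, 3, 3, 3, 2, 4, 5, 1]
print(f"Proficient or above: {percent_top_two(levels):.1f}%")  # 50.0%
```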

Thus, the means which SchoolDigger is utilizing to rank our schools is, in my opinion, of little practical value, since it does not tell us the percentage of the class or group actually meeting the target level of performance.  The SchoolDigger “average standard score” is also of no value in relation to our Mississippi system of determining school accountability grades.  While growth of the same student from one year to the next would be the ideal means of judging how much a student actually learned in a year, and teaching effectiveness along with it, this information is not available to the public in the released Mississippi test scores.  Without such growth data, the only accurate and relevant way to rank assessment results for schools and districts is the percent of students who scored a Level 4 or Level 5 on each of the assessments given in our schools by the state.  This is why you will never see an accountability report by the Mississippi Department of Education using “average” scores as proof of success on MAAP assessments by our schools or school districts.  The SchoolDigger data built upon an “average standard score” is interesting to look over, but it is of no real value for ranking the success or failure of our Mississippi schools.

-Clint Stroupe

2016-2017 Mississippi Algebra I MAAP Results Ranked by School & by District

Due to a little more teasing apart of the data, it always takes me a bit longer to post the Algebra I rankings.  As usual, I have listed the state’s Algebra I results by district/school and ranked them by the percent scoring in the top two levels.  The percent in the top two levels seems to be the preferred measure of students scoring a “Proficient or above” type score, which is the goal score range.
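For anyone curious how such a ranking is put together, here is a minimal sketch of the approach; the school names and counts are entirely hypothetical.

```python
# A minimal sketch of the ranking method used in these reports: sort
# schools by the percent of testers scoring in the top two levels.
# All names and counts below are hypothetical.

schools = {
    # school: (testers scoring Level 4 or 5, total testers)
    "Example Middle School": (72, 90),
    "Sample High School": (41, 120),
    "Anytown Attendance Center": (18, 45),
}

ranked = sorted(
    ((100 * top / total, name) for name, (top, total) in schools.items()),
    reverse=True,
)

for rank, (pct, name) in enumerate(ranked, start=1):
    print(f"{rank}. {name}: {pct:.1f}% proficient or above")
```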

The following links will take you to the Mississippi Academic Assessment Program (MAAP) Algebra I results from the 2016-2017 school year for junior high & middle schools without a 9th grade, high schools and attendance centers with a 9th grade, and for the districts as a whole (if curious about the reasoning behind this splitting of school rankings see the “Caveats” below):

2016-17 MAAP Algebra I Rankings of Middle & Junior High Schools Without a 9th Grade

2016-17 MAAP Algebra I Rankings of Attendance Centers & High Schools With a 9th Grade

2016-17 MAAP Algebra I Rankings of Mississippi School Districts

As discussed in previous years, more caution should be used in examining these Algebra I results than with any others listed.  There are several extremely important differences in how the Algebra I assessment is given and reported that make it quite unique.

Caveats of the 2016-2017 MAAP Algebra I results:

Algebra I is unique in that students may take it during the middle school years (typically the 8th grade).  The middle school students who took Algebra I in 2016-2017 all took the end-of-course MAAP Algebra I assessment, just as their high school counterparts did.  In many school districts across the state, students who have demonstrated advanced achievement in 7th grade mathematics are allowed to take Algebra I in the 8th grade in order to “get a jump” on the accumulation of high school credits.  This “jump” might pay off by freeing the student up to take more advanced electives, dual-credit/enrollment, or AP courses later in high school.  Why is this important when analyzing results reported by school?

  1. In a situation where a district has a separate elementary, junior high, or middle school which includes a 7th or 8th grade and has Algebra I testers, those results will show up under the elem/jr. high/middle school where the test was taken.  This has a two-fold effect.  First, the school with the junior high test takers will typically have extremely high test scores, as the more advanced students are usually the ones enrolled in the course (with some exceptional cases where the opposite is happening for strategic reasons, with polar opposite results).  Second, the school those students move on to for the 9th grade (the “high school”) will typically have much lower Algebra I scores on average, because the higher-achieving students already took the course in the 8th grade at the elem/jr. high/middle school they attended the year before.  Thus, middle schools will typically have much higher scores in comparison to all other school types.  This applies to the reported results only, not to the school toward which the students’ results count for accountability model grading.
  2. In some school districts these extremes do not take place at all, and results are not skewed by the “split” between taking Algebra I in the middle school grades.  This occurs for three typical reasons.  First, some districts have a blanket policy that no student, regardless of achievement, may take Algebra I before the 9th grade.  In those districts, all students’ scores fall under the high school in which they enter the 9th grade.  The only exception is the few schools across the state that include the 9th grade in their middle school or have a middle school made up only of 9th graders.  This 9th grade middle school scenario is extremely rare in Mississippi, but it does exist, causing further skewing of results when attempting to compare schools head to head.  Second, a fair number of high schools include grades 7th – 12th.  In these combined 7th – 12th high schools, no skewing takes place, as all Algebra I test takers are reported under the one school name regardless of the grade in which they take the course.  Third, a minority of K-12 schools are still left across the state.  These schools are in the same situation as the 7th – 12th grade high schools above, in that they will not have the skewed results that occur at the “caveat #1” schools.
  3. Thus, in an ideal situation, one would compare three categories of schools’ Algebra I results: first, elem/jr. high/middle schools with students taking Algebra I in the 7th/8th grade; second, high schools which receive students from those elem/jr. high/middle schools that allow Algebra I to be taken early; and third, K-12 attendance centers and 7th – 12th high schools whose scores reflect all of their Algebra I students regardless of grade level.
  4. In the real world, these categories must be taken into consideration when comparing schools (district comparisons are not affected, because all Algebra I testers, regardless of grade level, end up under the umbrella of the particular district’s results).  However, attempting to show these distinctions in statewide results is impossible without the state supplying information about each school’s grade levels (and perhaps even its philosophy or rules regarding students taking Algebra I).  Since my rankings rely on publicly available data, I have to use my own judgement as to which category a school falls under.

Due to these very important caveats, I have made my best attempt to show this distinction by ranking schools in two categories.  The first category includes elementary, junior high, and middle schools which do not have a 9th grade.  The second category includes K-12 attendance centers and all high schools that have a 9th grade (including both 7th-12th & 9th-12th high schools).  These categories are not perfect, as some schools (such as those very rare 9th grade only schools) have to be lumped into one category or the other even though they are unique situations.  Also, some school names may not reflect their actual grade levels (such as a “Nowhereville High School” which, despite its name, is actually a K-12 attendance center), resulting in me accidentally placing them in an inappropriate category.  However, I feel the attempt must be made to show at least these two category distinctions, or else the results would make little sense (with middle schools virtually dominating the top half of the rankings for the reasons listed above).
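For those interested, here is a minimal sketch of how such a two-category split could be automated, assuming a grade-span string were available for each school; that data is not in the public file, which is exactly why judgment calls remain.

```python
# A minimal sketch of the two-category split described above, assuming a
# (hypothetical) grade-span string like "6-8", "7-12", or "K-12" were
# available for each school.  In reality this is not in the public data.

def ranking_category(grade_span):
    """Return the Algebra I ranking category for a school's grade span."""
    low_str, high_str = grade_span.split("-")   # e.g. "7-12" -> "7", "12"
    low = 0 if low_str.upper() == "K" else int(low_str)
    high = int(high_str)
    if low <= 9 <= high:
        return "Attendance centers & high schools with a 9th grade"
    return "Middle & junior high schools without a 9th grade"

print(ranking_category("6-8"))   # middle/junior high category
print(ranking_category("7-12"))  # high school category
print(ranking_category("K-12"))  # attendance center category
```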

Despite the long-winded dissertation, I hope these results provide information you find beneficial.  I feel pretty confident in the data at this point, but please let me know if you spot any errors.  My goal in making this information available in this format is to aid improved instruction for all of our students.  I simply ask that, if you make use of the data in this format, you pass along word of where you obtained it.  To paraphrase Crash Davis from Bull Durham, I hope when you speak of me, you speak well.

Thanks,

Clint Stroupe

2016-17 Mississippi MAAP English II Rankings by District & School

It is once again time for our Mississippi Academic Assessment Program (MAAP) data release.  As usual, I have listed the state’s English II results by district/school and ranked them by the percent scoring in the top two levels.  The percent in the top two levels seems to be the preferred measure of students scoring a “Proficient or above” type score, which is the goal score range.

I feel pretty confident in the data at this point, but please let me know if you spot any errors.  My goal in making this information available in this format is to aid improved instruction for all of our students.  I simply ask that, if you make use of the data in this format, you pass along word of where you obtained it.  To paraphrase Crash Davis from Bull Durham, I hope when you speak of me, you speak well.

Simply click the link below to access the DISTRICT LEVEL English II ranking report:

2016-17 MAAP English II Rankings by District

Click the following link below to access the SCHOOL LEVEL English II ranking report:

2016-17 MAAP English II Rankings by School

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  Growth is far more valuable than end-of-year scores alone for determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without any knowledge of where they “began.”

2017 Mississippi MAAP 3-8 Math & Language Arts Rankings by District

It is once again time for our Mississippi Academic Assessment Program (MAAP) data release.  As usual, I have listed the state’s Language Arts and Mathematics results for grades 3rd – 8th by school district and ranked them by the percent scoring in the top two levels.  The percent in the top two levels seems to be the preferred measure of students scoring a “Proficient or above” type score, which is the goal score range.

I feel pretty confident in the data at this point, but please let me know if you spot any errors.  My goal in making this information available in this format is to aid improved instruction for all of our students.  I simply ask that, if you make use of the data in this format, you pass along word of where you obtained it.  To paraphrase Crash Davis from Bull Durham, I hope when you speak of me, you speak well.

Simply click the link below to access the ELA ranking report:

2016-17 MAAP Language Arts Rankings by District

Click the following link below to access the Mathematics ranking report:

2016-17 MAAP Mathematics Rankings by District

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  Growth is far more valuable than end-of-year scores alone for determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without any knowledge of where they “began.”

2017 Mississippi MAAP 3-8 Math & Language Arts Rankings by School

It is once again time for our Mississippi Academic Assessment Program (MAAP) data release.  As usual, I have listed the state’s Language Arts and Mathematics results for grades 3rd – 8th by school and ranked them by the percent scoring in the top two levels.  The percent in the top two levels seems to be the preferred measure of students scoring a “Proficient or above” type score, which is the goal score range.

I feel pretty confident in the data at this point, but please let me know if you spot any errors.  My goal in making this information available in this format is to aid improved instruction for all of our students.  I simply ask that, if you make use of the data in this format, you pass along word of where you obtained it.  To paraphrase Crash Davis from Bull Durham, I hope when you speak of me, you speak well.

Simply click the link below to access the ELA ranking report:

2016-17 MAAP Language Arts Rankings by School

Click the following link below to access the Mathematics ranking report:

2016-17 MAAP Mathematics Rankings by School

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  Growth is far more valuable than end-of-year scores alone for determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without any knowledge of where they “began.”

2015-2016 Mississippi Algebra I MAP Assessment Results Ranked by School & by District

The following are the Mississippi Assessment Program (MAP) Algebra I results from the 2015-2016 school year for junior high & middle schools without a 9th grade, high schools and attendance centers with a 9th grade, and for the districts as a whole:

2015-2016 Algebra I Rankings by Middle or Jr High School

2015-2016 Algebra I Rankings by Attendance Center or High School w 9th Grade

2015-2016 Algebra I Rankings by District

As discussed in previous years, more caution should be used in examining these Algebra I results than with any others listed.  There are several extremely important differences in how the Algebra I assessment is given and reported that make it quite unique.

Caveats of the 2015-2016 MAP Algebra I results:

Algebra I is unique in that students may take it during the middle school years (typically the 8th grade).  The middle school students who took Algebra I in 2015-2016 all took the end-of-course MAP Algebra I assessment, just as their high school counterparts did.  In many school districts across the state, students who have demonstrated advanced achievement in 7th grade mathematics are allowed to take Algebra I in the 8th grade in order to “get a jump” on the accumulation of high school credits.  This “jump” might pay off by freeing the student up to take more advanced electives, dual-credit/enrollment, or AP courses later in high school.  Why is this important when analyzing results reported by school?

  1. In a situation where a district has a separate elementary, junior high, or middle school which includes a 7th or 8th grade and has Algebra I testers, those results will show up under the elem/jr. high/middle school where the test was taken.  This has a two-fold effect.  First, the school with the junior high test takers will typically have extremely high test scores, as the more advanced students are usually the ones enrolled in the course (with some exceptional cases where the opposite is happening for strategic reasons, with polar opposite results).  Second, the school those students move on to for the 9th grade (the “high school”) will typically have much lower Algebra I scores on average, because the higher-achieving students already took the course in the 8th grade at the elem/jr. high/middle school they attended the year before.  Thus, middle schools will typically have much higher scores in comparison to all other school types.
  2. In some school districts these extremes do not take place at all, and results are not skewed by the “split” between taking Algebra I in the middle school grades.  This occurs for three typical reasons.  First, some districts have a blanket policy that no student, regardless of achievement, may take Algebra I before the 9th grade.  In those districts, all students’ scores fall under the high school in which they enter the 9th grade.  The only exception is the few schools across the state that include the 9th grade in their middle school or have a middle school made up only of 9th graders.  This 9th grade middle school scenario is extremely rare in Mississippi, but it does exist, causing further skewing of results when attempting to compare schools head to head.  Second, a fair number of high schools include grades 7th – 12th.  In these combined 7th – 12th high schools, no skewing takes place, as all Algebra I test takers are reported under the one school name regardless of the grade in which they take the course.  Third, a minority of K-12 schools are still left across the state.  These schools are in the same situation as the 7th – 12th grade high schools above, in that they will not have the skewed results that occur at the “caveat #1” schools.
  3. In an ideal situation, one would compare three categories of schools’ Algebra I results: first, elem/jr. high/middle schools with students taking Algebra I in the 7th/8th grade; second, high schools which receive students from those elem/jr. high/middle schools that allow Algebra I to be taken early; and third, K-12 attendance centers and 7th – 12th high schools whose scores reflect all of their Algebra I students regardless of grade level.
  4. In the real world, these categories must be taken into consideration when comparing schools (district comparisons are not affected, because all Algebra I testers, regardless of grade level, end up under the umbrella of the particular district’s results).  However, attempting to show these distinctions in statewide results is impossible without the state supplying information about each school’s grade levels (and perhaps even its philosophy or rules regarding students taking Algebra I).  Since my rankings rely on publicly available data, I have to use my own judgement as to which category a school falls under.

Due to these very important caveats, I have made my best attempt to show this distinction by ranking schools in two categories.  The first category includes elementary, junior high, and middle schools which do not have a 9th grade.  The second category includes K-12 attendance centers and all high schools that have a 9th grade (including both 7th-12th & 9th-12th high schools).  These categories are not perfect, as some schools (such as those very rare 9th grade only schools) have to be lumped into one category or the other even though they are unique situations.  Also, some school names may not reflect their actual grade levels (such as a “Nowhereville High School” which, despite its name, is actually a K-12 attendance center), resulting in me accidentally placing them in an inappropriate category.  However, I feel the attempt must be made to show at least these two category distinctions, or else the results would make little sense (with middle schools virtually dominating the top half of the rankings for the reasons listed above).

If interested in comparing to last year’s PARCC Algebra I assessments, you can view them by clicking on the following link:

2014-2015 Rankings by Mississippi School – PARCC Algebra I Assessment

Despite the long-winded dissertation, I hope these results provide information which you find beneficial.

Thanks,

Clint Stroupe

2015-2016 Mississippi English II MAP Assessment Results Ranked by School

In the same vein as the other assessment results, the following are the Mississippi Assessment Program (MAP) English II results from the 2015-2016 school year, ranked by individual school.  Unlike the Algebra I results, English II results are straightforward in that they apply to only one school, since the course and its end-of-course assessment are taken only at the high school level, in contrast to Algebra I, which can be taken at the middle school level.

The MAP results ranked by school can be accessed by clicking on the following link:

2015-2016 English II Rankings by School

The results from last year’s PARCC English II assessments ranked in the same manner can be viewed by clicking below:

2014-2015 Rankings by MS School – PARCC Eng II Assessment

As with all MAP assessments given for the first time in the 2015-2016 school year, there is no way to accurately determine growth, since these students previously took the MCT2 in the 8th grade.  Thus, the only prior ELA test data was from two years earlier, on a completely different assessment, and fell in the year when schools were teaching the CCSS while still giving the tests designed for the old curriculum framework.  All of that is simply to point out that determining growth between 8th grade MCT2 scores from a waiver year and the first-year MAP English II assessments would be of dubious value.  The data is purely for informational purposes, and I hope those interested find it useful.  As always, please let me know if you spot any issues.

Thanks,

Clint Stroupe

2016 Mississippi MAP 3-8 Math & Language Arts Rankings by School

I have listed the Mississippi Assessment Program (MAP) results for the state in Language Arts and Mathematics for grades 3rd – 8th by school and ranked them by the percent scoring in the top two levels.  The percent in the top two levels seems to be the preferred measure of students scoring a “Proficient-type” score, which is the goal score range.  This is almost identical to the ranking by school that I posted last year for the PARCC assessments in grades 3rd – 8th.  I feel pretty confident in the data at this point, but please let me know if you spot any errors.

Simply click the link below to access the ranking report:

2015-2016 MAP Rankings by School

Last year’s PARCC assessment results using the same ranking system are available by clicking on the following:

2015 Mississippi PARCC Rankings

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  True growth information is not available because this was the first time the MAP assessments were given.  Growth is far more valuable than end-of-year scores alone for determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without any knowledge of where they “began.”  The state has attempted to equate the 2014-2015 Mississippi PARCC assessment scores with the 2015-2016 MAP assessment scores in order to determine growth for accountability model purposes.  However, the accuracy of such a comparison, with only one year’s worth of data on either assessment, is questionable to say the least.

Fall 2015-Spring 2016 Mississippi Kindergarten Readiness Growth Rankings by School

With the recent release of the 2015-2016 Mississippi Kindergarten Readiness Assessment results, I thought some might be interested in the statewide growth rankings by individual school for last year’s assessments, which measure Kindergarten readiness at the beginning and the end of the Kindergarten year.  The data is still very fresh, so please do let me know if you spot anything that does not look right.  This information can be viewed by clicking on the following link:

2015-2016 Rankings by Growth MS Kindergarten Readiness Assessment by School

This information is nice in that it gives us “growth” data, an attempt to see what degree of learning might have occurred over the school year.  This is in contrast to the far less desirable end-of-course or end-of-year scores, which give us no indication of the level of achievement students had when they actually began the course.

However, there are several problems with drawing too many conclusions from these rankings or from the amount of growth used to determine them.  The 2015-2016 Mississippi Kindergarten Readiness Assessment had several characteristics which absolutely need to be considered when looking at this growth and drawing any conclusions from it:

  • This assessment is, in every way I can examine, the STAR Early Literacy assessment produced by Renaissance Learning, which holds the contract to produce this test for Mississippi.  The test is therefore not designed to assess students who are already quite literate by the end of the year.  Yet in many of our schools, a small number of Kindergarten students are moved up from STAR Early Literacy to the more advanced STAR Reading assessment to determine growth because of their high Early Literacy scores.  This matters because high-performing students can literally “top out” on this readiness assessment and fail to show significant growth by the end of the year as they hit the “ceiling” of the assessment’s design.  Several students hitting such a ceiling would drag down measured “growth,” since little or none can be detected in students who have exceeded the design of the assessment.  This is especially important to remember for schools that show higher average beginning and ending scores.  It can be somewhat illustrated by the scatter plot linked at the end of this post, which compares each school’s average beginning Fall 2015 score (x-axis) to its average scale-score gains on the Spring 2016 assessment (y-axis).  The correlation is not very strong overall, but you can see the negative trend: the higher a school’s average beginning Fall scale score, the lower its likelihood of landing in the top growth rankings after the ending Spring assessment.  More importantly, notice the schools on the upper end (average Fall scores of 545 and above): none of them managed more than 215 points of average growth (scale score gains).  I am not a statistician, so this is far from scientific.  However, I think it points to the strong possibility of the “ceiling” effect to which I am referring, and it should be kept in mind when examining growth at individual schools and districts.
  • Along similar lines, the information presented here is raw scale-score growth.  It does not tell us how close students came, on average, to hitting the appropriate growth “target” based upon their individual beginning scores.  This is important because students (on average) achieve very different magnitudes of scale-score growth depending upon their beginning score.  The same assessment given over multiple years, and/or to large numbers of students across the country, allows such “growth targets” to be determined, giving students and teachers the average amount of growth that would be statistically “typical” for a student to achieve.  In fact, I would assume this data is available, given that the STAR Early Literacy assessment is given all over the nation.  For example, students beginning the year with a score of 500 might “on average” grow to a score of 674 by the end of the year (174 points of raw-score growth), while a student beginning the year at 674 might “on average” grow only to 710 (36 points of raw-score growth).  In this hypothetical situation, a classroom of students who all began the year at 674 and ended the year at 725 (51 points of raw-score growth) would have “done better” than a classroom of students who all began the year at 500 and ended the year scoring 600 (100 points of raw-score growth).  The class beginning at 674 grew far more than would be typical for their peers, while the class beginning at 500 grew less than would be typical, even though in terms of pure score growth (the data contained in this ranking) the 674 class gained fewer points!  Instead of the magnitude of score growth, a comparison to an average “growth target” based upon the beginning-of-year score would be a much better indicator of learning progress (a small sketch of that comparison follows this list).  Such an analysis of what is typical is important to take into account before drawing too many conclusions about one school outperforming another in terms of growth, especially if those schools had very different average Fall (beginning-of-year) scores.
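Here is a minimal sketch of that growth-versus-typical comparison, using the hypothetical numbers from the example above.  The “typical growth” table is invented; real targets would have to come from the assessment’s national norms.

```python
# A minimal sketch of growth measured against a "typical growth" target.
# The fall-score -> typical-spring-score table below is hypothetical.

TYPICAL_SPRING_SCORE = {500: 674, 674: 710}

def growth_vs_typical(fall, spring):
    """Raw growth, and growth relative to what is typical for that fall score."""
    raw = spring - fall
    typical = TYPICAL_SPRING_SCORE[fall] - fall
    return raw, raw - typical

for fall, spring in [(674, 725), (500, 600)]:
    raw, rel = growth_vs_typical(fall, spring)
    print(f"Fall {fall} -> Spring {spring}: raw growth {raw:+}, vs. typical {rel:+}")

# Fall 674 -> Spring 725: raw growth +51, vs. typical +15
# Fall 500 -> Spring 600: raw growth +100, vs. typical -74
```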

All of these factors should certainly be considered when looking at the 2015-2016 growth results.  If these assessment results are truly going to be used to compare schools and districts head to head on “growth,” then hopefully the “top end ceiling” issue of this STAR Early Literacy assessment will be addressed.  That, and/or an analysis giving the typical growth for students with the same beginning score, together with a formula to weight the growth achieved against those beginning scores, is the only way such a head-to-head comparison of growth can be at all accurate.

Thanks,

Clint Stroupe

*The scatter plot referenced above can be downloaded for a better view by clicking the link below.  Please feel free to critique my rudimentary knowledge of correlations and the like.  Hopefully, I did not butcher it too badly.

Fall Scores vs Growth Scatter Plot
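For anyone wanting to reproduce this kind of analysis, here is a minimal sketch of the Fall-score-versus-growth scatter plot and correlation.  The data is randomly generated for illustration only; it is not the actual school results.

```python
# A minimal sketch of the analysis behind the scatter plot: average Fall
# scale score per school (x) vs. average scale-score gain (y), with a
# Pearson correlation.  All data here is simulated.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fall_avg = rng.uniform(440, 580, size=120)            # per-school Fall averages
# Simulate a mild ceiling effect: higher Fall scores -> somewhat lower gains.
gains = 260 - 0.35 * fall_avg + rng.normal(0, 25, size=120)

r = np.corrcoef(fall_avg, gains)[0, 1]
print(f"Pearson r = {r:.2f}")  # weakly negative, as described in the post

plt.scatter(fall_avg, gains)
plt.xlabel("Average Fall scale score (by school)")
plt.ylabel("Average scale-score gain (Fall to Spring)")
plt.title("Fall scores vs. growth (simulated)")
plt.show()
```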

The Need for Stable, Growth-Based Accountability

“Privileged groups work for greater power consolidation through favoritism.”
― Bryant McGill, Voice of Reason

School accountability models that have unreachable goals and are not growth-based serve one purpose: to confuse the public and the schools themselves.  They yield negative results for schools that are not truly reflective of student learning and give meaningful information to no one.  Such models serve no purpose other than a political one, making school systems appear to underperform in order to achieve political goals, regardless of what is truly occurring in a school.

Likewise, when states change their models every year, along with the assessments used in those models, the effort is a meaningless waste of time and funds.  A state would be better off without an accountability system than with one which is constantly changing, as both scenarios produce no accurate data that can be used in meaningful ways.  At least the absence of any accountability system whatsoever does not waste tax dollars on tests given without a real purpose, or instructional time wasted on such testing.

Growth-based, objective assessments of student performance, achievable accountability models that incorporate such growth, and systems of accountability that remain stable over multiple years are the only meaningful types of statewide accountability.  A state cannot afford to be without such a quality system, one that is the same for both public and charter schools.  Yet no system at all would be preferable to one which lacks these essential elements.

When accountability models have no meaning, due to their lack of consistency or achievability, we return to a time when the public is largely ignorant of which schools are actually producing growth in students.  We also return to a time when a few school systems were lucky enough to have honorable and intelligent administrators and teachers willing to stand up to political pressure and make decisions based upon the optimal learning of students.  For many other school systems, however, the absence of accountability returns us, to one degree or another, to the days when a school’s main goals were to attract no attention, to keep the “right” parents in the community happy, to keep property taxes low regardless of need, and to make sure the school provided jobs and promotions for the well-connected of the community instead of for those who produced the most gains for students.

Some educators would like to go back to the “good old days” prior to any testing or accountability.  Yet those old days were a world where the best schools, the best teachers, and the best administrators were largely identified for subjective reasons, such as their likability to those above them and the perception of those around them, regardless of the facts.  Even with evaluations based upon observation, every educator knows a teacher (or is one) who can “put on the dog and pony show” for an administrator’s visit at the drop of a hat.  Many say they detest politics and popularity contests in our schools today, but without objective accountability measures, such subjectivity will be the only criterion left for making decisions in many of our school systems.  I am for accountability models and their assessments as necessities, but only for those designed to actually work.  Without them there is absolutely no pressure on anyone to put people into positions based on their ability to produce, as opposed to the whim of those making such decisions and the influence of others upon them.

-Clint Stroupe