Mississippi Legislature Education Committee Bills – Alive & Dead – 1/31/2017

As of January 31, 2017, here is the status of bills in the Mississippi House and Senate Education Committees:

Mississippi House Education Committee Bills – Dead in Committee

Mississippi House Education Committee Bills – Still Alive

Mississippi Senate Education Committee Bills – Dead in Committee

Mississippi Senate Education Committee Bills – Still Alive


2015-2016 Mississippi Algebra I MAP Assessment Results Ranked by School & by District

The following are the Mississippi Assessment Program (MAP) Algebra I results from the 2015-2016 school year for junior high & middle schools without a 9th grade, for high schools and attendance centers with a 9th grade, and for districts as a whole:

2015-2016 Algebra I Rankings by Middle or Jr High School

2015-2016 Algebra I Rankings by Attendance Center or High School w 9th Grade

2015-2016 Algebra I Rankings by District

As discussed in previous years, more caution should be used in examining these Algebra I results than any others listed.  There are several extremely important differences in how the Algebra I assessment is given and reported that make it unique.

Caveats of the 2015-2016 MAP Algebra I results:

Algebra I is unique in that students may take it during the middle school years (typically the 8th grade).  Middle school students who took Algebra I in 2015-2016 all took the end-of-course MAP Algebra I assessment just as their high school counterparts did.  In many school districts across the state, students who have demonstrated advanced achievement in 7th grade mathematics are allowed to take Algebra I in the 8th grade in order to “get a jump” on the accumulation of high school credits.  This “jump” might pay off by freeing the student to take more advanced electives, dual-credit/enrollment, or AP courses later in high school.  Why is this important when analyzing results reported by school?

  1. In a situation where a district has a separate elementary, junior high, or middle school which includes a 7th or 8th grade and has Algebra I testers, those results will show up under the elem/jr. high/middle school where the students took the test.  This has a two-fold effect.  First, the school with the junior high test takers will typically have extremely high test scores, as the more advanced students are typically the ones enrolled in the course (with some exceptional cases at schools where, for strategic reasons, the opposite is taking place, with the opposite results).  Second, the school where those students move on to the 9th grade (the “high school”) will typically have much lower Algebra I scores on average, because the higher-achieving students already took the course in the 8th grade at the elem/jr. high/middle school where they were enrolled the year before.  Thus, middle schools will typically have much higher scores in comparison to all other school types.
  2. In some school districts these extremes do not take place at all, and results are not skewed by the “split” between taking Algebra I in the middle school grades.  This occurs for three typical reasons.  First, some districts have a blanket policy that no student, regardless of achievement, may take Algebra I before 9th grade.  Thus, in those districts all students’ scores will fall under the high school in which they enter the 9th grade.  The only exception is the few schools across the state that include the 9th grade in their middle school or have a middle school made up only of 9th graders.  This 9th grade middle school scenario is extremely rare in Mississippi, but it does exist, causing further skewing of results when attempting to compare schools head to head.  Second, a fair number of high schools include grades 7th – 12th.  In these combined 7th – 12th high schools, no skewing takes place, as all Algebra I test takers are reported under the one school name regardless of the grade in which they take the course.  Third, a minority of K-12 schools are still left across the state.  These schools are in the same situation as the 7th – 12th grade high schools: they will not have the skewing of results that takes place in the “caveat #1” schools listed above.
  3. In an ideal situation, one might compare three categories of schools’ Algebra I results: first, elem/jr. high/middle schools with students taking Algebra I in the 7th/8th grade; second, high schools which receive students from those types of elem/jr. high/middle schools; and third, K-12 attendance centers and 7th – 12th high schools whose scores reflect all of their Algebra I students regardless of grade level.
  4. In the real world, these categories must be taken into consideration when comparing schools (district comparisons are not affected, because all students taking Algebra I, regardless of grade level, end up under the umbrella of the particular district’s results).  However, attempting to show these distinctions when examining statewide results is impossible without the state supplying information about each school’s grade levels (and perhaps even its philosophy or rules regarding students taking Algebra I).  Since my rankings rely on publicly available data, I have to use my own judgment as to what category a school might fall under.

Due to these very important caveats, I have made my best attempt to show this distinction by making two categories for ranking schools.  The first category includes elementary, junior high, and middle schools which do not have a 9th grade.  The second category includes K-12 attendance centers and all high schools that have a 9th grade (including both 7th-12th & 9th-12th high schools).  These categories are not perfect, as some schools (such as those very rare 9th grade only schools) have to be lumped into one category or the other even though they are unique situations.  Also, some schools’ names may not reflect their actual grade levels (such as a hypothetical Nowhereville High School which, despite its name, is actually a K-12 attendance center), resulting in my accidentally placing them in an inappropriate category.  However, I feel the attempt must be made to show at least these two category distinctions, or else the results would make little sense (with middle schools virtually dominating the top half of the rankings for the reasons listed above).
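For illustration only, here is a minimal sketch of that two-category split, assuming each school’s grade span is actually known (the function name and the grade encoding, with Kindergarten as grade 0, are my own inventions; the real work involves judgment calls on ambiguous school names, as noted above):

```python
# A minimal sketch of the two-category split described above, assuming each
# school's lowest and highest grades are known. Kindergarten is encoded as
# grade 0; the function name and example data are hypothetical.
def ranking_category(low_grade: int, high_grade: int) -> str:
    """Assign a school to one of the two Algebra I ranking categories."""
    if low_grade <= 9 <= high_grade:
        # K-12 attendance centers and 7th-12th or 9th-12th high schools.
        return "attendance center / high school with a 9th grade"
    # Elementary, junior high, and middle schools without a 9th grade.
    return "middle or jr. high school without a 9th grade"

# A 7th-8th grade middle school vs. a K-12 attendance center:
print(ranking_category(7, 8))    # -> without a 9th grade
print(ranking_category(0, 12))   # -> with a 9th grade
```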

If interested in comparing to last year’s PARCC Algebra I assessments, you can view them by clicking on the following link:

2014-2015 Rankings by Mississippi School – PARCC Algebra I Assessment

Despite the long-winded dissertation, I hope these results provide information which you find beneficial.

Thanks,

Clint Stroupe

2015-2016 Mississippi English II MAP Assessment Results Ranked by School

In the same vein as the other assessment results, the following are the Mississippi Assessment Program (MAP) English II results from the 2015-2016 school year ranked by individual school.  Unlike the Algebra I results, English II results are straightforward in that they apply to only one school, since the course and its end-of-course assessment are taken only at the high school level, in contrast to Algebra I, which can be taken at the middle school level.

The MAP results ranked by school can be accessed by clicking on the following link:

2015-2016 English II Rankings by School

The results from last year’s PARCC English II assessments ranked in the same manner can be viewed by clicking below:

2014-2015 Rankings by MS School – PARCC Eng II Assessment

As with all MAP assessments given for the first time in the 2015-2016 school year, there is no way to accurately determine growth, as these students’ most recent prior ELA test was the MCT2 taken in the 8th grade.  Thus, the only previous ELA test data was from two years prior, on a completely different assessment, and fell in the year in which schools were teaching the CCSS while still giving tests designed for the old curriculum framework.  All that is simply to point out that determining growth between the 8th grade MCT2 scores from a waiver year and the first-year MAP English II assessments would be of dubious value.  The data is purely for informational purposes, and I hope those interested find it useful.  As always, please let me know if you spot any issues.

Thanks,

Clint Stroupe

2016 Mississippi MAP 3-8 Math & Language Arts Rankings by District

I have listed the Mississippi Assessment Program (MAP) results for the state in Language Arts and Mathematics for grades 3rd – 8th by district and ranked them by the percent scoring in the top two levels.  Using the percent in the top two levels seems to be the preferred method of determining the percent scoring a “Proficient-type” score, which is the goal score range (a rough sketch of this ranking method appears below).  I feel pretty confident in the data at this point, but please let me know if you spot any errors.
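As a rough illustration of the method (not the actual file or process I used), here is a minimal sketch assuming a hypothetical CSV with one row per district and columns named district, level4_pct, and level5_pct; the real released file layout and column names differ:

```python
# A minimal sketch of ranking districts by percent scoring in the top two
# levels. The file name and column names are hypothetical placeholders.
import pandas as pd

results = pd.read_csv("map_2015_2016_by_district.csv")

# Percent in the top two levels = the "Proficient-type" score range.
results["top_two_pct"] = results["level4_pct"] + results["level5_pct"]

# Rank from highest to lowest percent in the top two levels.
ranked = results.sort_values("top_two_pct", ascending=False).reset_index(drop=True)
ranked.index += 1  # rank starting at 1

print(ranked[["district", "top_two_pct"]].head(10))
```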

Simply click the link below to access the ranking report:

2015-2016 MAP Rankings by District

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  True growth information is not available because this was the first time the MAP assessments were given.  Growth is far more valuable than end-of-year scores alone in determining whether learning took place and to what degree; end-of-year scores only tell us where students in a district “ended up” without knowledge of where they “began.”  The state has attempted to equate the 2014-2015 Mississippi PARCC assessment scores with the 2015-2016 MAP assessment scores in order to determine growth for accountability model purposes.  However, the accuracy of such a comparison with only one year’s worth of data on either assessment is questionable to say the least.

2016 Mississippi MAP 3-8 Math & Language Arts Rankings by School

I have listed the Mississippi Assessment Program (MAP) results for the state in Language Arts and Mathematics for grades 3rd – 8th by school and ranked them by the percent scoring in the top two levels.  Using the percent in the top two levels seems to be the preferred method of determining the percent scoring a “Proficient-type” score, which is the goal score range.  This is almost identical to the ranking by school that I posted last year for the PARCC assessments in grades 3rd – 8th.  I feel pretty confident in the data at this point, but please let me know if you spot any errors.

Simply click the link below to access the ranking report:

2015-2016 MAP Rankings by School

Last year’s PARCC assessment results using the same ranking system are available by clicking on the following:

2015 Mississippi PARCC Rankings

Thanks,

Clint Stroupe

*These rankings are for informational purposes only.  True growth information is not available because this was the first time the MAP assessments were given.  Growth is far more valuable than end-of-year scores alone in determining whether learning took place and to what degree; end-of-year scores only tell us where students at a school “ended up” without knowledge of where they “began.”  The state has attempted to equate the 2014-2015 Mississippi PARCC assessment scores with the 2015-2016 MAP assessment scores in order to determine growth for accountability model purposes.  However, the accuracy of such a comparison with only one year’s worth of data on either assessment is questionable to say the least.

On Brexit & Our National Unity

“Let us therefore animate and encourage each other, and shew the whole world, that a Freeman contending for Liberty on his own ground is superior to any slavish mercenary on earth.”
― George Washington

It is amazing watching the aftermath of Britain’s vote to leave the European Union. One thing which has struck me is some people’s surprise that a group of people would want to be distinct, independent, and separate from the larger group. This seems a little ironic when you look back on all of human history and see this same scenario occurring over and over again. Whether it’s the Roman, Greek, Babylonian, Austro-Hungarian, or any of the other countless empires that have attempted to bring people together into one group, there has always been a tendency of groups to want to remain distinct. The same can be seen in modern countries such as Czechoslovakia, Afghanistan, or Iraq, which were not formed directly through conquest. For a nation to remain unified, it must share some sort of uniting cultural commonality.

The cultural glue may be ideals, language, values, or religion held in common, but there must be something which holds people together or the groups within the larger group who do share some of these things will begin to come together and eventually desire to self-direct their own future. Personally, I think this tendency of people will occur in spite of all of the positive economic or standard of living benefits of remaining in their current unified state. In the case of the United States, in my opinion, it was always a belief in freedom of the individual, agreement on the fundamental principles of our democratic republic as outlined in the Constitution and Declaration of Independence, and agreement on the need for all of us to respect the rule of law governing disagreements we might have with one another. I would argue that this has always allowed us to overcome the tendency to want to break apart and divide on the basis of our differing cultural and religious beliefs. We all shared the common idea that freedom of the individual is of the utmost importance and our form of democratic limited government protected that freedom from others, both within and without, imposing their will upon us as individuals.

The big question, I suppose, for our future is whether we will keep these common beliefs which bind our country together as a unit. If we do not agree upon such overriding ideals which can hold us together, the various differences which have always been present in our country will inevitably weaken us to some degree or another. Our country has always been unique and strong because of our ability to take various peoples from various differing backgrounds and come together because of our love for the beliefs that make the United States a united country based not upon common ethnicity or race, but upon common ideals. I sincerely hope we all do our best to make sure this is always the case for ourselves and future generations by recommitting to these ideals and emphasizing them to our young people as being the glue which has been able to hold us together thus far.

-Clint Stroupe

Fall 2015-Spring 2016 Mississippi Kindergarten Readiness Growth Rankings by School

With the recent release of the 2015-2016 Mississippi Kindergarten Readiness Assessment results, I thought some might be interested in examining the statewide growth rankings by individual school for last year’s assessments, which were given at the beginning and the end of the Kindergarten year to measure Kindergarten readiness.  The data is still very fresh, so please do let me know if you spot anything that does not look right.  This information can be viewed by clicking on the following link:

2015-2016 Rankings by Growth MS Kindergarten Readiness Assessment by School

This information is nice in that it gives us “growth” information with which to attempt to see what degree of learning might have occurred over the school year.  This is in contrast to the far less desirable end-of-course or end-of-year scores, which give us no indication of the level of achievement the students had when they actually began the course.

However, there are several problems with attempting to draw too many conclusions from these rankings or the amount of growth used to determine these rankings.  The 2015-2016 Mississippi Kindergarten Readiness Assessment had several characteristics which absolutely need to be considered when looking at this growth and drawing any conclusions from it:

  • This assessment is, in every way I can examine, the STAR Early Literacy assessment produced by Renaissance Learning, which has the contract to produce this test for Mississippi.  Therefore, the test is not designed to assess students who are already quite literate by the end of the year.  Yet, in many of our schools a small number of Kindergarten students are often moved up from STAR Early Literacy to the more advanced STAR Reading assessment to determine growth because of their high Early Literacy scores.  This is important to note because high-performing students might literally “top out” on this readiness assessment and not show significant growth by the end of the year as they hit the “ceiling” of this assessment’s design.  Several students hitting such a ceiling would adversely affect “growth,” since little or none could be detected in students who have exceeded the design of the assessment.  This is important to remember, especially in schools that show higher levels of average beginning and ending scores.  This can be somewhat illustrated by the graph below.

[Scatter plot: average Fall 2015 scale score by school (x-axis) vs. average scale score gains after the Spring 2016 assessment (y-axis)]

The graph compares the average beginning Fall 2015 score for each school (x-axis) to the average gains in scale score after the Spring 2016 assessment (y-axis).  The correlation is not very strong overall, but you can see the negative trend: the higher a school’s average beginning Fall scale score, the lower its likelihood of landing in the top growth rankings after the ending Spring assessment.  As stated, this negative correlation is not very strong overall, but it is a correlation.  More importantly, notice that among the schools on the upper end (average Fall scores of 545 and above), none managed more than 215 points of average growth (scale score gains).  I am not a statistician, so this is far from scientific.  However, I think it points to the strong possibility of the “ceiling” effect to which I am referring, and it should be kept in mind when examining growth at individual schools and districts.  (A rough sketch of this correlation check appears after this list.)
  • Along similar lines, the information presented here is raw scale score growth.  It does not tell us how close the students came, on average, to hitting the appropriate growth “target” based upon their individual beginning scores.  This is important because students (on average) achieve very different magnitudes of scale score growth depending upon their beginning score.  The same assessment given over multiple years and/or to large numbers of students across the country allows such “growth targets” to be determined, giving students and teachers an average amount of growth which would be statistically “typical” for a student to achieve.  In fact, I would assume this data should be available, given that the STAR Early Literacy assessment is given all over the nation.  For example, hypothetically, students beginning the year with a score of 500 might “on average” grow to a score of 674 by the end of the year (174 points of growth in raw score).  Alternatively, a student beginning the year at 674 might “on average” grow to a score of only 710 (36 points of growth in raw score).  In this hypothetical situation, a classroom of students who all began the year at 674 and ended the year at 725 (51 points of raw score growth) would have “done better” than a classroom of students who all began the year at 500 and ended the year at 600 (100 points of raw score growth).  The class beginning at 674 achieved far more growth than would be typical for their peers, while the class beginning at 500 grew less than their peers typically would, even though in terms of pure score growth (such as the data contained in this ranking) the 674 class did not gain as many points!  Instead of magnitude of score growth, this comparison to an average “growth target” based upon the beginning-of-year score would be a much better indicator of learning progress (a worked version of this example appears after this list).  Such analysis and comparison to what is typical is important to take into account before drawing too many conclusions about one school outperforming another in terms of growth, especially if those schools had very different average Fall (beginning-of-year) scores.
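For anyone who wants to reproduce the scatter plot comparison from the first bullet, here is a minimal sketch assuming a hypothetical CSV with one row per school and columns named avg_fall_score and avg_gain (the actual released file is laid out differently):

```python
# A minimal sketch of the Fall-score-vs-growth correlation check described
# in the first bullet above. File and column names are hypothetical.
import pandas as pd

df = pd.read_csv("k_readiness_growth_2015_2016.csv")

# Pearson correlation between average beginning (Fall) scale score and
# average scale score gain; a negative r is consistent with a ceiling effect.
r = df["avg_fall_score"].corr(df["avg_gain"])
print(f"Pearson r = {r:.3f}")

# Schools starting at an average of 545 or above, and their largest average gain.
high_start = df[df["avg_fall_score"] >= 545]
print("Max average gain among high-starting schools:", high_start["avg_gain"].max())
```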
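And here is a worked version of the hypothetical growth-target example from the second bullet.  The “typical” growth figures are the made-up numbers from the text, not real STAR Early Literacy norms:

```python
# A worked version of the hypothetical growth-target example above. The
# typical-growth figures are illustrative only, not actual norms.
TYPICAL_GROWTH = {500: 174, 674: 36}  # expected raw gain by beginning score

def growth_vs_target(begin: int, end: int) -> tuple[int, float]:
    """Return raw scale score growth and growth as a percent of the typical target."""
    raw = end - begin
    return raw, 100 * raw / TYPICAL_GROWTH[begin]

# Class beginning at 674 and ending at 725: 51 raw points, about 142% of target.
print(growth_vs_target(674, 725))
# Class beginning at 500 and ending at 600: 100 raw points, about 57% of target.
print(growth_vs_target(500, 600))
```

By this measure, the class that gained only 51 raw points “did better” than the class that gained 100, because each class is judged against what is typical for its starting point.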

All of these factors should certainly be considered when looking at the 2015-2016 growth results.  If the assessment results are truly going to be used to compare schools and districts head to head in regard to “growth,” then hopefully the “top end ceiling” issue of this STAR Early Literacy assessment will be addressed.  That, along with an analysis giving typical growth for students with the same beginning score and a formula to weight the growth “percentage” achieved against those beginning scores, is the only way such a head-to-head comparison of growth can be at all accurate.

Thanks,

Clint Stroupe

*The scatter plot graphic shown above can be downloaded for a better view by clicking the link below.  Please feel free to critique my rudimentary knowledge of correlations and the like.  Hopefully, I did not butcher it too badly.

Fall Scores vs Growth Scatter Plot

Mob Mentality

Excellent thoughts on the Biblical front by Adam Miller!

PDPreacher's Place

May 6 – Mark 10-16

In our study today, we see two events that seem completely contradictory. In Mark 11, we see Jesus entering Jerusalem from the east, riding on a donkey’s foal. He is welcomed by a crowd who are cutting down branches to lay in the road, and crying out “Hosanna! ‘BLESSED IS HE WHO COMES IN THE NAME OF THE LORD!’” This statement was a clear indicator that in Jesus, the mass of people gathered in Jerusalem for the feast of unleavened bread (Passover) saw a Savior, the Messiah Himself. The second event occurs in chapter 15, in the Praetorium, as Jesus stands before the governor Pilate. As Pilate offers to release Jesus without penalty, the priests stir up the crowd, and they begin to plead with Pilate to execute Jesus. Those haunting words ring out in unison: “Crucify Him!! Crucify Him!!” Where were the cries of…


The Need for Stable, Growth-Based Accountability

“Privileged groups work for greater power consolidation through favoritism.”
― Bryant McGill, Voice of Reason

School accountability models that have unreachable goals and are not growth-based serve one purpose: to confuse the public and the schools themselves. They yield negative results for schools that are not truly reflective of student learning and give meaningful information to no one. Such models serve no purpose other than a political one, making school systems appear to underperform in order to achieve political goals, regardless of what is truly occurring in a school.

Likewise, when states change their models every year, along with the assessments used in those models, the effort is a waste of time and funds that lacks any meaningful results. A state would be better off with no accountability system than with one which is constantly changing, as both scenarios produce no accurate data that can be used in meaningful ways. At least the absence of any accountability system whatsoever does not waste tax dollars on tests given without a real purpose, nor instructional time on such testing.

Growth-based, objective assessments of student performance, achievable accountability models that incorporate such growth, and systems of accountability which are stable over multiple years are the only meaningful types of statewide accountability. A state cannot afford not to have such a quality system in place, one that is the same for both public and charter schools. Yet no system at all would be preferable to one which lacks these essential elements.

When accountability models have no meaning due to their lack of consistency or achievability, we return to a time period when the public is largely ignorant of which schools are actually producing growth in students. We also return to a time when only a few school systems were lucky enough to have honorable and intelligent administrators and teachers willing to stand up to political pressure and make decisions based upon the optimal learning of students. For many school systems, however, no accountability returns us, to one degree or another, to the days when the school’s main goals were to not attract any attention, to keep the “right” parents in the community happy, to keep property taxes low regardless of need, and to make sure the school provided jobs and promotions for the well-connected of the community instead of those who produced the most gains for students.

Some educators would like to go back to the “good old days” prior to any testing or accountability. Yet those old days were a world where the best schools, the best teachers, and the best administrators were largely determined for subjective reasons, such as their likability to those above them and the perception of those around them, regardless of facts. Even with evaluations based upon observation, every educator knows a teacher (or is one themselves) who is able to “put on the dog and pony show” for an administrator’s visit at the drop of a hat. Many say they detest politics and popularity contests in our schools today, but without objective accountability measures, such subjectivity will be the only criterion left for making decisions in many of our school systems. I am for accountability models and their assessments as necessities, but only for those designed to actually work. Without them there is absolutely no pressure on anyone to put people into positions based on their ability to produce, as opposed to simply the whim of those making such decisions and the influence of others upon them.

-Clint Stroupe