MCAS 1999

Written by Robert D. Gaudet, Senior Research Analyst
University of Massachusetts Donahue Institute - March 2000

This research is designed to identify school districts in Massachusetts whose student test scores exceed the scores predicted by the district's demographic characteristics. The work is not intended to rank districts' performances but rather to highlight the efforts of districts whose students are exceeding what they would be expected to achieve on statewide standardized tests. The goal is to enable other districts to study and learn from the efforts of systems identified as effective in this analysis.

The Effectiveness Index: Year II

The first analysis of school district effectiveness came out in February of 1999 and evaluated the 1998 MCAS in terms of district demography. The central tool of that analysis was the Effectiveness Index, a methodology that examines the relationship between selected demographic characteristics and educational outcomes. These characteristics include: average education level, average income, poverty rate, single-parent status, language spoken, and percentage of school-age population enrolled in private schools. These variables were chosen because they correlate with achievement and because the education literature identifies them as connected to academic performance.

Researchers ranging from James Coleman in the 1960s to James Comer in the 1990s have demonstrated that community demographics play a major role in how well children do in school. The Effectiveness Index provides a means of isolating the role that community characteristics play in student performance on statewide educational assessments. Once a community's achievement context is factored into its test results, it is possible to see how much value the school system adds beyond demographic expectations.

The Effectiveness Index identifies school districts that add value to the learning readiness of their students as indicated by higher-than-predicted test scores. Identifying such systems is a first step to determining if they are indeed providing more effective educational services to their students. Identifying best practices in effective systems that are demographically similar to less effective systems may help those systems improve their school services.

This second edition incorporates several new elements into the process:

1) More districts are included. Last year's analysis was confined to communities with populations over 6,000. This year's evaluation considers any school district where at least 50 students took the MCAS exam in a grade. Last year's work covered about 93% of the students in the Commonwealth. This year's extends to about 97% of the student population taking MCAS.

2) Regional school districts are included in this year's work. Last year, only individual communities were considered, not regional districts, which are composed of several communities. That meant that individual communities were evaluated outside of their regional school identity. For example, last year Pembroke was evaluated as Pembroke in 10th grade even though Pembroke is part of the Silver Lake Regional District for high school. This year's analysis looks at school district performance on a regional district basis where appropriate. Community demographics have been factored to reflect the regional school district characteristics.

Observations

• We see many repeat performers. Many districts that outperformed their demography last year did the same this year. These include Woburn, Harvard, North Reading, and Norwood. This is not surprising; a system that had organized itself to enhance student achievement in 1998 is likely to have kept that up in 1999.

• Some of the new over-performers made the list this year because they were not large enough to have been considered last year. Orleans, a strong over-performer this year, was too small to be included in last year's work. With smaller districts included this year, Orleans is in the mix.

• Districts that over-perform their demography tend to be middle-class or demographically advantaged communities. Generally, upper-demography communities are two to three times more likely to over-perform than communities of lower demography. This is unsettling in that middle and upper-middle class communities do not need to over-perform their demography to meet state achievement standards. Based on two years of MCAS, most of their students already perform well enough to pass the MCAS graduation requirements. Districts that are disadvantaged need to overcome their demography in order to lift more of their students into success on MCAS, but so far these districts are having a hard time outperforming their community characteristics.

• Districts that over-performed their demography did so without any apparent benefit from high per-pupil school spending or high levels of new state education reform aid. Generally, over-performers spent at or below state average and were not the recipients of generous amounts of Chapter 70 (education assistance and reform) aid. This fact is interesting in that providing additional funding is a major reform tool of the Education Reform Act of 1993. Based on this analysis, that extra funding has not necessarily purchased the systemic change that enables systems to help their students perform above their demographic expectations.

• So far, after seven years of reform funding, there is little evidence that the schools have changed in any fundamental ways. MCAS scores were relatively flat from 1998 to 1999. This is of concern because Education Week pointed out that "Of the states in the early stages of testing, only Massachusetts failed to post significant gains in the second year of its new assessment." (David J. Hoff, "Testing's Ups and Downs Predictable," Education Week, Jan. 26, 2000, p. 12.) With flat early results, the challenge of improving results, especially for demographically disadvantaged systems, is daunting.

The Importance of Identifying Over-performing Systems

Identifying systems that over-perform their demography is important in that such systems may have valuable lessons to offer similar systems in their efforts to boost student achievement. Most of the over-performers were not among the demographically disadvantaged systems. This means that when we identify low demography/over-performing systems, we need to study them carefully and see if they indeed do have lessons to teach their peers across the Commonwealth. After two years of MCAS, we know that many of our less advantaged districts have far to go to meet new state standards, so helping them move ahead is critically important to ultimate success.

The real battle in education reform is to change the way disadvantaged systems teach their children. Approximately 60% of students in very disadvantaged systems are not now in a position to meet state graduation standards. Those systems are home to 266,000 students, 31% of the state's student population. These are future citizens who are struggling to learn how to read, write, and do basic math at a level sufficient to meet state standards and to succeed in life. Identifying peer systems that have figured out how to help their students over-perform their demography is essential to making education reform work in Massachusetts.

The MCAS

Testing plays an important role in most of the contemporary school reform efforts in the United States. The Massachusetts education reform effort is no exception. Its testing vehicle is the Massachusetts Comprehensive Assessment System or, as it is commonly known, the MCAS.

The MCAS is a battery of tests that is given each year to students in Grades 4, 8, and 10 in each school district. The MCAS is aligned with a series of curriculum frameworks that are being developed by the state Department of Education. MCAS covers such academic subjects as math, science, and literacy skills, with more subjects to be added later. The test scores are broken down by individual student, school, and district. The scores for individual students are available to their parents, teachers, principals, and superintendents. The scores for entire schools and districts are available to the public.

The chief objective of the state's education reform initiative is to enable public school students to achieve a certain level of knowledge and skill. The Massachusetts Department of Education has established this level by setting out what students are expected to learn in each basic subject. School districts are supposed to see to it that their students learn what they are expected to learn. The purpose of the MCAS is to gauge periodically how students are doing as they try to achieve this level of knowledge and skill.

With the MCAS, the state has, for the first time in its history, an evaluation mechanism that measures how much progress students are making towards meeting established goals. At the same time, individual school districts are urged to anticipate and complement the MCAS by developing their own parallel methods of assessing how their students are doing. Thus, the education reform effort uses assessment as a way to help all students move toward a high level of academic achievement.

Just as this overall effort views higher student achievement as its end, it views the improvement of the public schools as its chief means to achieve this end. What happens in school is by no means the only or even the leading influence on how pupils currently perform on standardized academic tests. However, what happens in school obviously is the only means that is currently within the control of the schools themselves. So it is the only means of reform that is at the disposal of the education improvement effort as it now exists.

Improving Our Schools

The more the test scores can be used to inform decisions about how to alter what happens in school, the better the chances to make the schools more effective in helping their students learn more. Properly used, the results can pinpoint which approaches to teaching and learning are working and which are not. The MCAS also includes an array of diagnostic tools that let teachers and administrators spot areas where students perform poorly, so that the staff can work with the students to mend the weaknesses.

Consequently, the essence of education reform in Massachusetts can be summed up in a few words: Better student performance, through more effective schools.

However, for the MCAS to fulfill its intended role in the current education reform effort, there are at least two important conditions that have to be met.

FIRST, the tests, and other assessments, must be fair and accurate. They must measure what children have learned, rather than just their social or economic background. They must not be biased for, or against, any group of students.

SECOND, the tests must be used to make the public schools more effective. Thus, the scores should drive an ongoing analysis of what makes the school experience effective. They must provide teachers with a critical piece of information about the potential learning problems and possibilities of individual students. And the information must be used as a basis for helping all students to do better.

To meet the second condition, we must be able to use the MCAS scores as one tool to discern the effectiveness of our schools. We must be able to establish how effective they are today, and to track the rise or fall of their effectiveness in the future. Thus, finding ways to measure school effectiveness is essential to education reform.

Measuring Effectiveness

Student academic performance, including how students do on MCAS tests, is influenced by two broad sets of factors: school factors and non-school factors. The first entails what happens in school, and thus what is within the control of the school district itself. The second entails conditions outside the schools, such as the demographic profile of the students and the community. As we look at a given district's average score on an MCAS test, we have to be able to discern how much of the score is tied to school factors, and how much of the score is explained by non-school factors.

How well do the school design and the curriculum promote learning for all? Are teachers top-notch professionals who have both the skills and commitment to teach all students? Are professional development activities rigorously aligned with efforts to increase student achievement? Is there strong, solid leadership in the school? Are there high expectations for all? Are parents full partners in their children's education? Are there adequate resources to do the job? These are all questions about school factors.

In the research reported in this paper, non-school factors consist largely of the overlapping demographic conditions of family life and community life. This study utilizes six such conditions in a given school district: its median level of educational attainment; its median income level; its percentage of households above the poverty line; its percentage of single-parent families; its percentage of non-English-speaking households; and its level of private school enrollment. Statistical analysis shows that these factors form much of the non-school influence on how the state's students do on such standardized tests as the MCAS.
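To make the idea concrete, the sketch below shows one way such an analysis could be set up. It is not the study's actual code; the data file and column names are hypothetical placeholders. It regresses district MCAS averages on the six demographic factors and reports how much of the variation in scores demographics alone explain.

```python
# A minimal sketch, not the study's actual code: regress district MCAS
# averages on the six demographic factors and report how much of the
# variation in scores demographics alone explain (R-squared).
# The CSV file and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

DEMOGRAPHIC_FACTORS = [
    "median_education",    # median level of educational attainment
    "median_income",       # median income level
    "pct_above_poverty",   # percentage of households above the poverty line
    "pct_single_parent",   # percentage of single-parent families
    "pct_non_english",     # percentage of non-English-speaking households
    "pct_private_school",  # level of private school enrollment
]

districts = pd.read_csv("district_demographics_and_scores.csv")

X = districts[DEMOGRAPHIC_FACTORS]
y = districts["avg_mcas_score"]

model = LinearRegression().fit(X, y)
print(f"Share of score variation explained by demographics (R^2): {model.score(X, y):.2f}")
```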

As we all know, students in advantaged districts tend to get higher standardized test scores than students in disadvantaged districts. Thus, if a district's students get a high average score on an MCAS or other standardized tests, the test score by itself does not tell us how much of the score is explained by school factors and how much is explained by non-school factors. A high score might be tied more to advantaged demography than to what actually happens in the district's schools. The score by itself is not a sound guide to how effective the school district is.

We cannot begin to zero in on just how effective the school district itself is unless we can distinguish between the respective influences of the two types of factors. Only then can we discern how effectively the district itself performs, and how much it contributes to its students' average performance on the MCAS.

The Effectiveness Index (EI) provides insight into this distinction, and consequently provides some measure of the school district's contribution to its students' performance. Thus, it supplies a crucial piece of insight into which schools are more effective.

For a given district, the Effectiveness Index gauges the impact that school factors have on the average MCAS score. The greater the positive impact of the school factors, the higher the district's Effectiveness Index will be.

The Effectiveness Index is calculated in the following manner: For a given district, the six demographic factors are used as the basis for projecting a likely average score on the MCAS. The demographically likely score is then compared to the average score that the students in the district actually received. The Effectiveness Index is the difference between the two: the actual score minus the demographically likely score.

If the number is negative - if the actual score is lower than the likely score - then this suggests that what is happening in the schools in the district is not enabling its students to perform beyond the demographic expectations for them. If the number is a positive number - if the actual score is higher than the likely score - then this suggests that what is happening in the schools is helping the district's students to surpass the demographic expectations for them. (For a fuller account of the development of the Effectiveness Index, please see Appendix B.)
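As a minimal illustration of this final step, with made-up numbers rather than real district results, the index reduces to subtracting the demographically predicted score from the actual score:

```python
# A minimal sketch of the final step: the Effectiveness Index is the actual
# average score minus the demographically predicted score, so a positive
# value means the district outperformed its demographic expectation.
# The scores below are hypothetical, not real district results.

def effectiveness_index(actual_score: float, predicted_score: float) -> float:
    """Return the Effectiveness Index: actual minus demographically predicted score."""
    return actual_score - predicted_score

print(effectiveness_index(actual_score=236.0, predicted_score=232.0))  #  4.0 -> over-performing
print(effectiveness_index(actual_score=228.0, predicted_score=232.0))  # -4.0 -> under-performing
```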

¹Per pupil expenditure [PPE] is a school factor, but our measures of it are not always reliable. There is no standard accounting procedure for establishing PPE. For example, some systems might include teacher retirement costs, capital costs, federal funds, and long-term disability obligations in their per-pupil spending figure. Others might not. Therefore, comparisons across districts are difficult to make.

²Other family and community conditions are crucial to students' success, but they are hard to observe and measure. One would have to monitor many families and communities closely over time to discern how family and community behavior affects school outcomes. How many books are read in the family? How much time is taken up by TV-watching? How do the community's adults treat children other than their own? Does the community mentor its young people? It is hard to get reliable answers to such questions. But we know that the children of advantaged families and communities are more likely on average to have such resources and support, and children in less advantaged situations are less likely to have them. Therefore, we use gross measures of such support as a proxy for answers to the more specific questions that are so hard to pursue.
