Maybe Testing Isn’t The Problem After All

After four years and hundreds of millions of federal Race to the Top dollars spent creating the Partnership for Assessment of Readiness for College and Careers (PARCC) test—an assessment of the extent to which students are mastering the knowledge and skills necessary for life after high school—it’s disappointing to hear that an increasing number of states, including my home state of Colorado, are already thinking of scrapping it.

Now, it’s not hard to understand why not everyone was happy with the first round of results, released in the fall of 2015. There were technological glitches, and scores were consistently lower than they had been on previous state tests.

Was PARCC too long or too difficult, as some critics have claimed? Or do these results perhaps call into question the validity of those previous scores and the tests used to arrive at them?

I’m a classroom teacher and a supporter of the Common Core State Standards. But I also have a vested interest in the long-term success of the PARCC assessment because I was involved in creating the test. As a member of the PARCC Educator Leader Cadre, I was among those tasked with item development, review, outreach and more.

But putting aside my personal feelings about wasting all that money and time, it’s hard not to suspect that politics are at play.

Although disagreement about PARCC is part of a larger debate over measuring the performance of students and teachers, scrapping the test—only to replace it with a new one to serve the same purpose—will result in additional cost, time and confusion, all of which are unnecessary if people look at the test for what it is: an indicator of student progress.

In 2009, the Council of Chief State School Officers (CCSSO) and the National Governors Association (NGA) came together, in an unprecedented show of unity, around the realization that consistent and high standards for student learning were needed in all states. By 2010, 42 states, the District of Columbia, four territories and the Department of Defense Education Activity had signed on to develop a common set of academic standards for the knowledge and skills necessary for success after high school.

Following the adoption of the Common Core State Standards, the federal government offered Race to the Top funds to certain states to develop high-quality assessments to measure how well the new standards were being met.

According to the U.S. Department of Education, “These assessments are intended to play a critical role in educational systems [and] provide administrators, educators, parents and students with the data and information needed to continuously improve teaching and learning.”

Using the funds allocated through Race to the Top, PARCC and the Smarter Balanced Assessment Consortium (SBAC) brought together teams of educators from colleges and universities, public schools, departments of education, and elsewhere to envision and develop a standardized assessment, administered in participating states, that would serve as a comprehensive and reliable measure of how well academic standards were being met in schools across the country.

This time-consuming process involved considerable collaboration: the resulting test was carefully reviewed, evaluated and revised according to feedback from teachers, administrators and the higher education community.

Do those opposed to the test have a better idea for how to create a new test—one that will more accurately measure students’ progress toward the agreed-upon higher standards?

No. Any test will have to align to the new standards, nearly all of which mirror the language and expectations of the Common Core.

Would a test that is considered to be more in line with local populations look considerably different from the current test or be fundamentally better?

No. In fact, in response to requests from various states, teachers and parents, PARCC has made it possible to customize the assessment for the location where it’s administered.

Will making the questions easier give us better or more accurate information about how students are doing or how teachers should adjust their lessons to better prepare their students for the next level?

No. In fact, lowering our expectations for students only increases the achievement and opportunity gaps.

Let’s all take a deep breath and consider that new initiatives are like seeds in a garden. Tossing them in and expecting immediate growth is unreasonable. But giving them time, attention and nourishment will lead to plentiful harvests. And while it’s understandable that parents and teachers, community members, and other stakeholders are disappointed in the initial PARCC test scores, an all-out attack on the test itself is not the solution.

Instead, it’s the responsibility of us all, including our political decision-makers, to make mature, reasonable and fiscally responsible decisions about how to cultivate better schools.

Pandering to those who are reacting to scores may be a good way to get elected, but it’s not going to get us the results we know are possible.

This article was originally shared on Education Post, where you can read more about accountability.

Jessica is a fifth-grade teacher and professional development facilitator in the Weld RE-1 school district, which serves rural students in Platteville, Gilcrest and LaSalle, Colorado. Jessica develops comprehensive teacher training programs around Common Core State Standards for the National Math and Science Initiative and consults with local districts and teachers on how to effectively implement the standards.

She is a member of the Colorado PARCC Educator Leader Cadre and the Colorado Core Advocates, as well as a Core Advocate with Student Achievement Partners and a fellow with America Achieves and the Collaborative for Student Success. She has hosted multiple webinars and produced video clips in support of the college- and career-readiness standards in states across the country. She is incredibly passionate about working with students, and believes that teachers are the key to their success. She blogs at Moore Achievement.

Image adapted from Flickr user duncanhull.