Paper-Based Or Computer-Based Testing? Whichever Meets Student Needs


contributed by Chad Barrett and Stuart Kahl

As the shift to digital continues in schools and classrooms, it’s easy to assume that computer-delivered assessments are the future of testing. While many education leaders extol the efficiency of this new model, paper testing shouldn’t be forgotten quite yet.

A Difference Between Modes

Benjamin Herold recently reported in Education Week that students who took the 2014–15 PARCC exams on a computer tended to score lower than those who took a traditional pencil-and-paper test. This pattern was especially prevalent in English language arts and middle- and high-school math.

The performance differences between paper and online testing are often attributed to students’ comfort level with technology. If this were the case, a relatively simple solution would be to implement classroom practices that familiarize students with assessment technology. Although technology access might play a role in performance differences, another possibility to consider is how students might interact differently with screens than they do with paper.

Interactions with screens, whether on a computer, tablet or smartphone, often are filled with distractions such as ads, emails and notifications. Naomi Baron's recent book Words Onscreen: The Fate of Reading in a Digital World concludes that people tend to skim or stop reading in the middle of a lengthy piece of text on a web page. If students comprehend text differently on a screen than on paper, this would certainly have implications for testing performance.

Current research on the differences between screen reading and paper reading is inconclusive. Although we don’t know the specifics about these differences, we can surmise that the experience of reading and testing on a screen is not the same as reading and testing on paper. We should not expect the same results from tests that are delivered through distinct modes.

Computer-delivered tests that are totally machine-scored also differ from other modes of testing because they tend not to tap deeper learning as effectively. Deeper learning refers to the ability to apply foundational knowledge and skills to the solution of bigger, real-world problems. Performance assessments, often steps in project-based learning, generate student work products such as papers, models, and presentations that better demonstrate higher-order skills.

Such assessments, instead of using technology for test delivery, might make use of presentation software, ePortfolios, and distributed scoring systems.

Next Steps As An Educator

The higher scores on paper tests call into question whether we should continue down the path of tech-delivered assessments. With benefits such as technology-enhanced items and reduced scoring time for teachers, online assessments certainly have their place in assessment pedagogy. Ideally, more research should be conducted on the student performance differences between online and paper tests. Once these differences are understood, tests can be better designed and implemented to measure what they intend to measure.

Until then, educators can work with students to improve classroom assessments and prepare them for large-scale assessments. Here are three suggestions for building both paper and online testing skills with your students.

1. Offer a variety of classroom assessments

For the time being, paper and online tests are both here to stay. Providing students with the opportunity to engage with multiple testing modes is the best way that we can prepare them for large-scale assessments—as well as for higher education. Teachers can deliver both traditional paper assessments and online assessments with free websites, software and apps.

2. Conduct a “think-aloud” testing session

Think-aloud studies help researchers and educators understand the thought processes students use when they complete an educational task. During these studies, students talk through how they read and responded to test questions. Educators can mimic this process in class by asking students to verbalize their thinking as they take in-class paper and online assessments. The results from this classroom practice can inform teachers how to help students perform better across various testing modes.

3. Keep individual needs in mind

Individual students' technology access might play a role in the differences between testing modes, but it is just one aspect of a student's background that might affect performance. Teachers need to work one-on-one with students to figure out what might be challenging for them as they complete assessments. Having these conversations throughout the school year will help students improve across testing modes.

Conclusion

As educators and test developers continue to navigate various testing modes and designs, it will be vital to track how each decision meets the testing needs of teachers, as well as students. There will never be one test that meets every need a district, teacher or student might have. As research continues to inform testing practices, educators must keep in mind which test can most accurately measure the intended construct.

Will there ever be a clear answer as to whether paper or online testing is inherently better than the other? It’s possible there won’t be. However, we must continue asking such questions in order to improve testing practices for K–12 students.

Chad Barrett is Measurement Services Senior Advisor at Measured Progress, a not-for-profit assessment organization. He has more than 15 years of educational assessment experience, particularly in content development and program design.

Stuart Kahl is Founding Principal and former CEO of Measured Progress. With 35 years of experience in large-scale assessment, Dr. Kahl is a frequent speaker at industry conferences and serves as a technical consultant to education agencies.

image attribution: flickr user usarmycorpofengineers