Ben Backes and James Cowan from CALDER have published a working paper on the differences between computer- and paper-based tests.
Massachusetts introduced the new PARCC assessment in 2015. School districts could choose whether to use the computer or paper version of the test, and in 2015 and 2016 districts were divided fairly evenly between the two. The authors use this division to compare results for pupils in Grades 3–8 (Years 4–9).
Pupils who took the online version of PARCC scored about 0.10 standard deviations lower in maths and about 0.25 standard deviations lower in English than pupils taking the paper version of the test. When pupils took computer tests again the following year, these differences narrowed by about a third in maths and by about half in English.
The study also looked at whether the change to computer tests affected some pupils disproportionately. There were no differences for maths, but for English there was more of an effect on pupils at the bottom of the achievement distribution, pupils with English as an additional language and special education pupils.
The authors point out that these differences have consequences not only for individual pupils, but also for other decisions based on the data, including teacher and school performance measures and the analysis of schoolwide programmes.
Source: Is the pen mightier than the keyboard? The effect of online testing on measured student achievement (April 2018), National Center for Analysis of Longitudinal Data in Education Research, Working Paper 190
Joseph Hardcastle and colleagues conducted a study to compare pupil performance on computer-based tests (CBT) and traditional paper-and-pencil tests (PPT). More than 30,000 pupils in grades 4–12 (Years 5–13) were assessed on their understanding of energy using three testing formats: a paper-and-pencil test; a computer-based test that allowed pupils to skip items and move freely through the test; and a CBT that did not allow pupils to return to previous questions.
Overall, the results showed that being able to skip through questions, and review and change previous answers, could benefit younger pupils. Elementary (Years 5 and 6) and middle school (Years 7–9) pupils scored lower on a CBT that did not allow them to return to previous items than on a comparable computer-based test that allowed them to skip, review, and change previous responses. Elementary pupils also scored slightly higher on a CBT that allowed them to go back to previous answers than on the PPT, but there was no significant difference for middle school pupils on those two types of tests. High school pupils (Years 10–13) showed no difference in their performance on the three types of tests.
Gender was found to have little influence on a pupil’s performance on PPT or CBT; however, pupils whose primary language was not English had lower performance on both CBTs compared with the PPT.
Source: Comparing student performance on paper-and-pencil and computer-based tests. Paper presented at the 2017 AERA Annual Meeting, 30 April 2017. American Association for the Advancement of Science.