Ben Backes and James Cowan from CALDER have published a working paper on the differences between computer- and paper-based tests.
In 2015, Massachusetts introduced the new PARCC assessment. School districts could choose whether to administer the computer or paper version of the test, and in 2015 and 2016 districts were divided fairly evenly between the two. The authors use this variation to compare results for pupils in Grades 3–8 (Years 4–9).
Pupils who took the online version of PARCC scored about 0.10 standard deviations lower in maths and about 0.25 standard deviations lower in English than pupils who took the paper version. When pupils took computer tests again the following year, these gaps narrowed by about a third in maths and by about half in English.
The study also examined whether the change to computer tests affected some pupils disproportionately. There were no differences in maths, but in English the effect was larger for pupils at the bottom of the achievement distribution, pupils with English as an additional language, and special education pupils.
The authors point out that these differences have consequences not only for individual pupils but also for other decisions based on the data, including teacher and school performance measures and the evaluation of schoolwide programmes.
Source: Is the pen mightier than the keyboard? The effect of online testing on measured student achievement (April 2018), National Center for Analysis of Longitudinal Data in Education Research (CALDER), Working Paper 190