Is the pen mightier than the mouse?

Ben Backes and James Cowan from CALDER have published a working paper on the differences between computer- and paper-based tests.

In 2015, Massachusetts introduced the new PARCC assessment. School districts could choose whether to use the computer or paper versions of the test, and in 2015 and 2016, districts were divided fairly evenly between the two. The authors use this division to compare results for pupils in Grades 3–8 (Years 4–9).

Pupils who took the online version of PARCC scored about 0.10 standard deviations lower in maths and about 0.25 standard deviations lower in English than pupils taking the paper version of the test. When pupils took computer tests again the following year, these differences reduced by about a third for maths and by half for English.

The study also looked at whether the change to computer tests affected some pupils disproportionately. There were no differential effects in maths, but in English the negative effect was larger for pupils at the bottom of the achievement distribution, pupils with English as an additional language, and special education pupils.

The authors point out that these differences have consequences not only for individual pupils, but also for other decisions based on the data, including teacher and school performance measures and the evaluation of schoolwide programmes.

Source: Is the pen mightier than the keyboard? The effect of online testing on measured student achievement (April 2018), National Center for Analysis of Longitudinal Data in Education Research, Working Paper 190

Teaching assistants make a positive difference to pupil outcomes

A working paper from the National Center for Analysis of Longitudinal Data in Education Research finds evidence that teaching assistants can have positive effects on pupil outcomes.

Charles T. Clotfelter and colleagues examined the role of teaching assistants and other non-teaching staff in elementary (primary) schools in North Carolina to identify causal effects on pupils’ test scores in maths and reading.

Positive effects were identified on reading test scores, but in maths, positive effects were found only for minority pupils. For both reading and maths, the effects on minority pupils’ test scores were larger than those on white pupils’ test scores.

The report also found that having more teachers (and therefore smaller class sizes) had a number of positive effects on test scores, particularly for minority pupils, and was also associated with lower absence rates and a lower probability of high rates of in-school suspension.

Source: Teaching assistants and nonteaching staff: Do they improve student outcomes? (2016), National Center for Analysis of Longitudinal Data in Education Research (CALDER)

Can a test predict teacher success?

The edTPA is a US assessment, introduced in 2013, that evaluates prospective teachers’ classroom performance. It is used by more than 600 teacher education programmes in 40 states, and passing it is a requirement for licensure in 7 states. To determine whether teacher candidates who achieve higher scores on this test go on to be more effective with their students than lower-scoring candidates, the National Center for Analysis of Longitudinal Data in Education Research (CALDER) conducted the first independent study of edTPA, and found mixed results.

The study followed 2,300 teacher candidates in Washington State who took the edTPA in 2014. Their scores were correlated with their students’ standardised test scores in reading and maths. The study found that new teachers who passed the edTPA on their first try increased their students’ reading achievement scores more than new teachers who didn’t pass the edTPA on their first attempt. There were no differences in the effects on students’ maths scores.

The authors discuss the complicated implications of these findings for policy and practice. For example, they note that new teachers who fail the test on their first attempt may ultimately become high-performing teachers, and warn against screening them out of the workforce.

Source: Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA (2016), National Center for Analysis of Longitudinal Data in Education Research (CALDER)

No added value from Master’s degrees

A recent working paper from CALDER uses longitudinal administrative data on teachers and pupils from North Carolina to examine whether a teacher having a Master’s degree has an impact on their pupils’ outcomes.

The authors used data on pupils and teachers from 2005 to 2011, including pupils’ demographic and achievement data. The study concludes that teachers with Master’s degrees are no more effective than those without. The only consistently positive effect of attaining a Master’s degree was lower pupil absence rates in middle school.

Source: Do Master’s Degrees Matter? Advanced Degrees, Career Paths, and the Effectiveness of Teachers (2015), National Center for Analysis of Longitudinal Data in Education Research (CALDER)