Planning ahead for summer

Heather L Schwartz and colleagues from the RAND Corporation have released the final report of a six-year study of the National Summer Learning Project, a Wallace Foundation initiative implemented in 2011 in five urban school districts in the US. The district-led, voluntary summer learning programmes featured both academic teaching and enrichment opportunities, with the aim of improving outcomes for low-income pupils.

The overall study combined a randomised controlled trial with correlational analysis and implementation research to examine whether voluntary, district-run summer learning programmes can improve academic, behavioural, and social and emotional outcomes for low-income, urban children in both the short and long term. The study followed approximately 5,600 pupils from third to seventh grade (Years 4 to 8), drawing on surveys, observations and test data.

Findings showed that pupils who received a minimum of 25 hours of mathematics teaching over a summer performed better on the subsequent state maths test, and those who received at least 34 hours of English lessons performed better on the subsequent state English language assessment.

These outcomes should be viewed with caution, however, as pupils who actually attended summer school, as opposed to those who signed up but did not attend, are likely to be more highly motivated and higher achieving, introducing possible selection bias.

Based on their research, the authors offer several recommendations for planning for summer learning, including:

  • Commit in the autumn to a summer programme, and start active planning by January with a programme director who has at least half of his or her time devoted to the job.
  • Prior to the start of the summer programme, provide professional development for summer teachers that includes specific guidance on using the summer curricula, minimising loss of teaching time, and checking for pupil understanding.
  • Operate the programme for five to six weeks with three to four hours of academic lessons per day.

A more detailed and comprehensive list of recommendations can be found in the report.

Source: Getting to work on summer learning: Recommended practices for success, 2nd edition (2018), RAND Corporation

Jury still out on Teach for America

A new Campbell Collaboration systematic review has been published, which looks at the impact of Teach for America on learning outcomes.

Teach for America (TFA) is a nationwide teacher preparation programme designed to address the shortage of effective teachers, specifically in high-poverty rural and urban schools across the United States. The systematic review by Herbert Turner and colleagues considered the impact of TFA-prepared teachers relative to other novice teachers, and of TFA alumni relative to veteran teachers, on K–12 (Years 1–13) pupil outcomes in maths, English and science.

A total of 24 studies were eligible for the review. However, once research design, study quality and comparison groups were taken into account, only four studies qualified for inclusion.

The review found that TFA teachers in their first or second year of teaching elementary grades (Years 1–6) had no significant effect on reading when compared with non-TFA novice teachers. There was a small positive impact on reading, but not on maths, for teachers of pre-K to grade 2 (Reception to Year 3). However, given the small evidence base, the review counsels that these results should be treated with caution.

Source: What are the effects of Teach for America on math, English language arts, and science outcomes of K–12 students in the USA? (June 2018), A Campbell Systematic Review 2018:7

Do physically active lessons improve pupil engagement?

A study published in Health Education and Behavior looks at the effects of introducing physically active lessons into primary school classes. Emma Norris and colleagues used the Virtual Traveller (VT) intervention to evaluate whether physically active lessons had any effect on pupil engagement, physical activity and on-task behaviour.

Virtual Traveller is a programme of pre-prepared, physically active lesson sessions delivered using classroom interactive whiteboards during regular lessons. A total of 219 children aged 8 to 9 years old from 10 schools in Greater London took part in the cluster-randomised controlled trial. Children in the intervention schools received 10-minute VT sessions three times a week, for six weeks, during maths and English lessons. To assess the effectiveness of VT, pupils’ physical activity levels, on-task behaviour and engagement were measured at baseline (T0), at weeks two (T1) and four (T2) of the six-week intervention, and at one week (T3) and three months (T4) post-intervention.

Pupils in the intervention group showed more on-task behaviour than those in the control group at T1 and T2, but this improvement was not maintained post-intervention. No difference in pupil engagement between the intervention and control groups was observed at any time point. VT increased physical activity, but only during lesson time.

Source: Physically active lessons improve lesson activity and on-task behavior: a cluster-randomized controlled trial of the “Virtual Traveller” intervention (March 2018), Health Education & Behavior DOI: 10.1177/1090198118762106

Is the pen mightier than the mouse?

Ben Backes and James Cowan from CALDER have published a working paper on the differences between computer- and paper-based tests.

In 2015, Massachusetts introduced the new PARCC (Partnership for Assessment of Readiness for College and Careers) assessment. School districts could choose between the computer and paper versions of the test, and in 2015 and 2016 districts were divided fairly evenly between the two. The authors use this division to compare results for pupils in grades 3–8 (Years 4–9).

Pupils who took the online version of PARCC scored about 0.10 standard deviations lower in maths and about 0.25 standard deviations lower in English than pupils taking the paper version of the test. When pupils took computer-based tests again the following year, these differences narrowed by about a third in maths and by about half in English.

The study also looked at whether the change to computer-based tests affected some pupils disproportionately. There were no differential effects in maths, but in English the effect was greater for pupils at the bottom of the achievement distribution, pupils with English as an additional language, and special education pupils.

The authors point out that these differences have consequences not only for individual pupils, but also for other decisions based on the data, including teacher and school performance measures and the analysis of schoolwide programmes.

Source: Is the pen mightier than the keyboard? The effect of online testing on measured student achievement (April 2018), National Center for Analysis of Longitudinal Data in Education Research, Working Paper 190

Missing the mark at GCSE English

Clearing, or falling just short of, thresholds in high-stakes public examinations can be an important factor in success or failure in many people’s lives. One well-known example is the need to obtain a grade C in GCSE English. New research by the Centre for Vocational Education Research (CVER) analyses the costs of narrowly failing, or only just achieving, a grade C in GCSE English.

Stephen Machin and colleagues tracked the progress of more than 49,000 pupils who took GCSE English in 2013 and received a grade C or D, and then looked at how they progressed over the next three years. Results showed that pupils of similar ability have significantly different trajectories depending on whether they just pass or just fail the exam. Those who narrowly miss out on a grade C but continue in education post-16 may find that the options open to them are more limited, and may end up in settings with lower-performing peers. They also have a lower probability of enrolling in a higher-level qualification (by at least nine percentage points), are more likely to drop out of education at age 18 (by about four percentage points), and are at increased risk of poorer prospects in the long term.

Source: Entry through the narrow door: the costs of just failing high stakes exams (April 2018), Centre for Vocational Education Research (CVER) Research Discussion Paper 014 

Do teacher observations make any difference to pupil performance?

An evaluation published by the Education Endowment Foundation (EEF) has found that introducing more frequent and structured lesson observations – where teachers observe their colleagues and give them feedback – made no difference to pupils’ GCSE maths and English results.

A randomised controlled trial of the whole-school intervention, Teacher Observation, was conducted in 82 secondary schools in England with high proportions of pupils who had ever been eligible for free school meals. In total, the study involved 14,163 pupils: 7,507 pupils (41 schools) in the intervention group and 6,656 pupils (41 schools) in the control group.

Maths and English teachers in the intervention schools were asked to take part in at least six structured 20-minute peer observations over a two-year period (with a suggested number of between 12 and 24). Teachers rated each other on specific elements of a lesson, such as how well they managed behaviour, engaged pupils in learning, or used discussion techniques.

The evaluation, which was conducted by a team from the National Foundation for Educational Research (NFER), found that Teacher Observation had no impact on pupils’ GCSE English and maths scores compared to those of pupils in control schools (effect size = -0.01).

Source: Teacher Observation: Evaluation report and executive summary (November 2017), Education Endowment Foundation