Open Court Reading receives judgement in multi-year trial

A multi-year scale-up study has examined the effectiveness of Open Court Reading (OCR), a phonics-based curriculum for grades K-6 (Reception to Year 7).

The study, by Michael Vaden-Kiernan and colleagues, and published in the Journal of Research on Educational Effectiveness, was a school-level, cluster randomised trial involving around 4,500 pupils and 1,000 teachers in 49 elementary schools across the US.

The OCR curriculum includes pupil materials, teacher manuals, diagnostic and assessment tools, and test preparation practice guides. In all grades, the instructional format is a three-part lesson with specific instruction in phonemic awareness and phonics; vocabulary and comprehension skills; and writing skills. The programme includes one- or two-day summer workshops at the start of each school year to train teachers on programme implementation, and ongoing support from OCR reading consultants.

An implementation study showed adequate to high levels of fidelity to the programme. However, there were no statistically significant effects on reading performance in the first year of the study, and a small negative effect (effect size = -0.09) in the second. Overall, relative to “business-as-usual” controls, OCR showed no positive impacts, and impacts for pupil subgroups were mixed.
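The effect sizes quoted throughout this digest are standardised mean differences: the gap between treatment and control group means, divided by a standard deviation. As a rough illustration only (the scores below are fabricated, not from the study), Cohen's d with a pooled standard deviation can be computed like this:

```python
import statistics

def cohens_d(treatment, control):
    """Standardised mean difference between two groups, using the pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    # Pooled standard deviation, weighted by each group's degrees of freedom.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Fabricated reading scores: the treatment group averages one point below
# the control group, with a pooled SD of 10, giving d = -0.1 -- a small
# negative effect of the kind reported above.
treatment = [84, 94, 99, 104, 114, 99]
control = [85, 95, 100, 105, 115, 100]
print(round(cohens_d(treatment, control), 2))  # → -0.1
```

By the usual rule of thumb, effects below about 0.2 in magnitude (such as the -0.09 here, or the +0.07 to +0.16 in the trials below) are considered small.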

Source: Findings from a multiyear scale-up effectiveness trial of Open Court Reading (June 2017), Journal of Research on Educational Effectiveness

Evaluation of Success for All

The Education Endowment Foundation recently published a report on the effectiveness trial of the Success for All (SFA) programme to evaluate its impact on the literacy outcomes of Reception pupils. SFA is a whole-school approach to improving literacy in primary schools. All teachers and senior leaders are involved, with the school receiving a total of 16 training and support days. Teachers receive pedagogical training – for example on effective phonics teaching – and teaching materials such as structured lesson plans. For the school leadership team, there is support in areas such as data management, ability grouping and parental engagement.

Fifty-four schools took part in this effectiveness trial, which was evaluated by Sarah Miller and colleagues from Queen’s University Belfast. Although the intervention was delivered on a whole-school basis, the evaluation focused only on the outcomes of 1,767 pupils starting in Reception, and followed them through to the end of Year 1.

The main analysis found that Reception pupils in SFA schools made more progress than pupils in control schools after two years (effect size = +0.07). The effect was slightly larger for pupils eligible for free school meals (FSM), compared with FSM pupils in control schools after two years (effect size = +0.12). In both cases, the effects were smaller than those found in previous evaluations (this is the third RCT of SFA to be conducted, and the first independent trial of the programme in England); trials in the US reported effect sizes between +0.15 and +0.30. The report suggests that one possible reason for this was that some schools struggled to implement the programme as intended.

The project delivery team was from the University of York. Robert Slavin, director of the US Center for Research and Reform in Education, is Co-founder and Chairman of the Board of the Success for All Foundation.

Source: Success for All: Evaluation report and executive summary (July 2017), Education Endowment Foundation

Talking in class boosts progress in maths, science and English

An intervention that trained teachers to improve and monitor the quality of classroom talk had a positive impact on primary pupils’ test scores in English, maths and science, a report published by the Education Endowment Foundation (EEF) reveals.

Seventy-six primary schools with higher-than-average proportions of disadvantaged pupils took part in a randomised controlled trial of the Dialogic Teaching intervention, which is designed to improve the quality of classroom talk as a means of increasing pupils’ engagement, learning and achievement. Year 5 teachers in 38 schools (2,493 pupils), and a teacher mentor from each school, received resources and training from the delivery team and then implemented the intervention over the course of the autumn and spring terms in the 2015/16 school year. A control group of 38 schools (2,466 pupils) continued with business as usual. Following the intervention, pupils were tested in English, maths and science.

The results showed that pupils in the intervention schools did better on the main outcome measures of English (effect size = +0.16), science (+0.12) and maths (+0.09) when compared with pupils in the control schools. For pupils eligible for free school meals, the impact was higher for maths (+0.16) and similar for English (+0.12) and science (+0.11). Teachers reported positive effects on pupil engagement and confidence, and on the whole the intervention was highly regarded by participating schools. However, some teachers felt that it would take longer than two terms to fully embed a Dialogic Teaching approach in their classrooms.

The Dialogic Teaching intervention was developed by the Cambridge Primary Review Trust and the University of York.

Source: Dialogic teaching: evaluation report and executive summary (July 2017), Education Endowment Foundation

Does playing chess improve maths ability?

An article published in Learning & Behavior examines whether learning to play chess can help improve children’s mathematical ability. To test this hypothesis, Giovanni Sala and Fernand Gobet, from the University of Liverpool, conducted two studies with primary school children in schools in Italy.

The first experiment involved 233 children from eight schools (mean age = 8.5 years). The experimental group (N=53) attended 25 hours of chess lessons during school hours (although not necessarily during maths lessons), alongside regular school activities, and were then given a test of mathematical ability and a questionnaire assessing metacognitive ability. Their results were compared with those of both an active control group (who were similarly taught to play draughts) and a passive control group (who continued with regular school activities). The results showed no significant difference between the three groups in mathematical or metacognitive ability.

For the second experiment, 52 children (mean age = 9.32 years) in three classes of a primary school in Italy participated. Classes were randomly assigned to the three experimental conditions, but this time the active control group learned the game of Go instead of draughts, and both the chess and Go instruction replaced some of the time originally dedicated to learning maths (approximately 15 hours). The results showed no significant effects of learning chess on mathematical ability. Children in the passive control group seemed to benefit slightly more than those learning chess or Go. There was no difference between the three experimental groups on metacognitive ability.

The study concludes that the results of the two experiments do not support the hypothesis that learning chess benefits children’s mathematical ability. The effects of chess, if any, appear to be minimal and too limited to provide any educational advantage over traditional teaching methods.

Source: Does chess instruction improve mathematical problem-solving ability? Two experimental studies with an active control group (June 2017), Learning & Behavior, doi:10.3758/s13420-017-0280-3

Science professional development and pupil achievement: A cluster-randomised trial

Joseph Taylor of Abt Associates and colleagues conducted a rigorous study of the Science Teachers Learning Through Lesson Analysis (STeLLA) professional development (PD) programme.

STeLLA is designed to increase elementary (primary) teachers’ science knowledge. Instead of the standard practice of teaching pupils to memorise science concepts and then perform activities that demonstrate them, STeLLA teachers lead pupils to discover science concepts through experience and experimentation. One of STeLLA’s main tenets is to have pupils think through science problems aloud so that teachers can respond to pupils’ ideas and guide them towards scientific conclusions and specific learning goals. Its other distinguishing feature is that over the course of a year, groups of 5–10 teachers led by a PD coach watch and critique videos of experienced science teachers’ lessons, later moving on to their own and their colleagues’ lessons, analysing them with respect to science content, teaching and learning. In addition, STeLLA teachers are taught by university science faculty during the summer prior to implementation to strengthen their science content knowledge, a process called “content deepening”.

In the current study, researchers used a cluster-randomised design to compare STeLLA to The Content Deepening Program, a PD programme that deepens teachers’ science knowledge through university faculty-led science teaching, like STeLLA does, but without STeLLA’s analysis-of-practice component. Seventy-seven schools, with 144 teachers and 2,823 fourth and fifth grade pupils (Years 5 and 6) in Colorado, were randomly assigned either to STeLLA (n=42 schools) or to The Content Deepening Program (n=35 schools) in two cohorts, the first in 2011–12 and the second in 2012–13. Teachers in both conditions experienced 88 hours of PD and had the same learning goals for their pupils. Pupils were pre- and post-tested on a science measure based on established assessments. Although the control group demonstrated a slight achievement advantage at baseline, results showed that pupils in STeLLA classes scored higher (effect size = +0.55) at post-test than pupils in classes whose teachers had been through The Content Deepening Program.

Source: The effect of an analysis-of-practice, videocase-based, teacher professional development program on elementary students’ science achievement (2017), Journal of Research on Educational Effectiveness, Volume 10, Issue 2

Evaluation of “Code Clubs”

The National Foundation for Educational Research (NFER) has published the results of a randomised controlled trial and process evaluation of Code Clubs – a UK network of after-school clubs where children aged 9–11 learn to program by making games, animations, websites and applications. Code Club UK produces materials and projects that support the teaching of Scratch, HTML/CSS and Python. The clubs, which are supported by volunteers, usually run for one hour a week after school during term time.

The evaluation, conducted by Suzanne Straw and colleagues, assessed the impact of Code Clubs on Year 5 pupils’ computational thinking, programming skills and attitudes towards computers and coding. Twenty-one schools in the UK took part in the trial, which used a pupil-randomised design to compare outcomes in the intervention and control groups. Intervention group pupils attended Code Club during the 2015/16 academic year, while control group pupils continued as normal.

The results of the evaluation showed that attending Code Club for a year did not improve pupils’ computational thinking beyond what would have occurred anyway, but did significantly improve their coding skills in Scratch, HTML/CSS and Python. This was true even when control children learned Scratch as part of the school computing curriculum. Code Club pupils reported increased usage of all three programming languages – and of computers more generally. However, the evaluation data suggest that attending Code Club for a year does not affect how pupils view their abilities in a range of transferable skills, such as following instructions, problem solving, learning about new things and working with others.

Source: Randomised controlled trial and process evaluation of code clubs (March 2017), National Foundation for Educational Research (NFER)