Self-explanation may be more effective than presenting pupils with an explanation

Researchers at Simon Fraser University in Canada conducted a meta-analysis of research investigating learning outcomes for pupils who received self-explanation prompts while studying or solving problems. Self-explanation is a process by which pupils use prior knowledge to make inferences in order to fill in missing information or monitor their understanding.

Their study, published in Educational Psychology Review, examined 69 independent effect sizes from 64 studies (5,917 participants). Studies had to include a treatment condition in which learners were directed or prompted to self-explain during a learning task, with a comparison condition in which learners were directed not to self-explain. The measure was a cognitive outcome such as problem solving or comprehension. Learning activities were mostly of short duration (less than an hour) and carried out mostly with undergraduate students.

The analysis found an overall weighted mean effect size of +0.55 on learning outcomes for pupils who were prompted to self-explain compared with those who were not. However, most of the studies were very brief and artificial, so the outcomes cannot be assumed to apply to actual classroom practice. Moderating variables were also examined to investigate how learning outcomes varied under a range of conditions, but none was found to make a significant difference to effect sizes. The study concludes that having pupils come up with an explanation themselves is often more effective than presenting them with one.

Source: Inducing self-explanation: a meta-analysis (September 2018), Educational Psychology Review, Volume 30, Issue 3

Does mindfulness training work for young people?

A meta-analysis published in The Journal of Child Psychology and Psychiatry aims to establish the efficacy of mindfulness-based interventions (MBIs) for children and adolescents.

Darren Dunning and colleagues carried out a systematic literature search of randomised controlled trials (RCTs) of MBIs conducted up to October 2017. Thirty-three studies (3,666 children, aged 18 years or younger) were included in the meta-analysis, with outcome measures categorised as cognitive, behavioural, or emotional. In addition, a separate meta-analysis was completed for 17 RCTs (1,762 children) that had an active control condition (ie, something else that might be expected to benefit participants, but did not include mindfulness).

Across all RCTs, the researchers found small positive effects of MBIs, compared with control groups, across all measures (overall effect size = +0.19). In particular, MBIs led to greater improvements in mindfulness (effect size = +0.24), executive functioning (effect size = +0.30), and attention (effect size = +0.13). However, in the RCTs with active control groups, children who completed an MBI improved significantly more than those in the active control groups only on outcomes of mindfulness (effect size = +0.42), depression (effect size = +0.47), and anxiety/stress (effect size = +0.18).

Source: Research review: The effects of mindfulness‐based interventions on cognition and mental health in children and adolescents – a meta‐analysis of randomized controlled trials (October 2018), The Journal of Child Psychology and Psychiatry doi:10.1111/jcpp.12980

The advantages of print vs. digital reading: A meta-analysis

A recent meta-analysis showed that paper-based reading yields better reading comprehension outcomes than digital reading. In an article appearing in Educational Research Review, Pablo Delgado and colleagues from Spain and Israel analysed 54 studies from 2000–2017 comparing reading comprehension outcomes for comparable paper and digital texts. They examined whether one medium has an advantage over the other for reading outcomes, and what factors contribute to any differences found.

Results showed that paper text has an advantage over digital text (effect size = +0.21). Factors favouring paper text included reading under time limitations, text type (informational or informational plus narrative), and publication year, with later publications showing greater advantages for paper reading than earlier ones.

While the authors do not advocate getting rid of digital texts given their convenience, cost advantages and pervasiveness, they reflect that these study findings should be considered when pupils are required to perform digitally-related tasks under time constraints.

Source: Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension (November 2018), Educational Research Review, Volume 25

Effects of shared book reading for young EAL children

A meta-analysis, published in Review of Educational Research, examines how shared book reading affects the English language and literacy skills of young children learning English as an additional language (EAL).

Shared book reading involves an adult reading with one or more children, and is considered to be an effective practice for language and literacy development. It may also involve interactive practices such as dialogic reading techniques to engage children or reinforce specific ideas or words from the text.

For this meta-analysis, Lisa Fitton and colleagues identified 54 studies of shared reading interventions conducted in the US that met their inclusion criteria. The total number of participants across the studies was 3,989, with an average age of six.

Results revealed an overall positive effect of shared reading on EAL outcomes (effect size = +0.28). Children’s developmental status moderated this effect, with larger effect sizes found in studies including only typically developing children (effect size = +0.48) than in studies including only participants with developmental disorders (effect size = +0.17).

Source: Shared book reading interventions with English learners: a meta-analysis (October 2018), Review of Educational Research, Volume 88, Issue 5

How significant is publication bias in educational research?

Research syntheses combine the results of all qualifying studies on a specific topic into one overall finding or effect size. Publication bias occurs when studies with larger or more significant effect sizes are published more often than studies of equal quality with smaller, non-significant, or null findings. The danger of publication bias is that the published literature does not accurately represent all of the research on a given topic, but instead emphasises the most dramatic findings.

In this month’s Educational Psychology Review, an article by Jason Chow and Erik Eckholm of Virginia Commonwealth University examines the extent of publication bias in education and special education journals. They examined the differences in mean effect sizes between published and unpublished studies included in meta-analyses (one kind of research synthesis), whether a pattern emerged in the characteristics of published vs unpublished studies, and how often these meta-analyses tested for publication bias.

From 20 journals, 222 meta-analyses met the inclusion criteria for the meta-review, with 29 containing enough information to also be eligible for effect size calculations. The researchers found that, across the 1,752 studies included in those meta-analyses, published studies had significantly higher effect sizes than unpublished studies (a difference in effect size of +0.64), and studies with larger effect sizes were more likely to be published than those with smaller effect sizes. Fifty-eight percent (n = 128) of the meta-analyses did not test for publication bias. The authors discuss the implications of these findings.

Source: Do published studies yield larger effect sizes than unpublished studies in education and special education? A meta-review (September 2018), Educational Psychology Review, Volume 30, Issue 3, pp 727–744

The effect of teacher coaching on teaching and learning

Matthew A Kraft and colleagues conducted a meta-analysis of the causal evidence on the effect of teacher coaching on teaching and learning. Their paper, published in the Review of Educational Research, reviewed 60 studies on teacher coaching programmes conducted after 2006 that measured the impact of teacher coaching on either teaching (measured using tools such as the Classroom Assessment Scoring System or the Early Language and Literacy Classroom Observation) or pupil academic performance (measured by standardised tests).

Their results showed that sustained coaching improves both classroom teaching and pupil achievement, with pooled effect sizes of +0.49 standard deviations for teaching and +0.18 standard deviations for academic achievement.

However, the effectiveness of a teacher coaching programme seems to depend on the number of participants. When studies were divided into programmes with fewer than 100 participants and those with more than 100, the impact on teaching was nearly double for the smaller programmes. For pupil achievement, the smaller programmes showed an impact nearly three times that of the larger programmes.

Source: The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence (February 2018), Review of Educational Research, Volume 88, Issue 4