The advantages of print vs. digital reading: A meta-analysis

A recent meta-analysis showed that paper-based reading yields better reading comprehension outcomes than digital reading. In an article appearing in Educational Research Review, Pablo Delgado and colleagues from Spain and Israel analysed 54 studies from 2000–2017 that compared comprehension of comparable paper and digital texts. They examined whether one medium has an advantage over the other for reading outcomes, and which factors contribute to any differences found.

Results showed that paper text has an advantage over digital text (effect size = +0.21). Factors favouring paper included reading under time constraints, text type (informational, or a mix of informational and narrative), and publication year: more recent studies showed a larger advantage for paper reading than earlier ones.
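The effect sizes quoted throughout these summaries are standardised mean differences: the gap between two group means expressed in standard-deviation units. As a purely illustrative sketch (the numbers are invented, not taken from the study), the basic Cohen's d form is

d = \frac{\bar{X}_{\text{paper}} - \bar{X}_{\text{digital}}}{SD_{\text{pooled}}}

so if paper readers averaged 52 on a comprehension test, digital readers averaged 50 and the pooled standard deviation was 10, the effect size would be (52 − 50) / 10 = +0.20, close to the +0.21 reported here. Many meta-analyses use a bias-corrected variant such as Hedges' g, but the interpretation is the same.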

While the authors do not advocate abandoning digital texts, given their convenience, cost advantages and pervasiveness, they suggest that these findings should be kept in mind when pupils are required to complete reading tasks digitally under time constraints.

Source: Don’t throw away your printed books: A meta-analysis on the effects of reading media on reading comprehension (November 2018), Educational Research Review, Volume 25

Effects of shared book reading for young EAL children

A meta-analysis, published in Review of Educational Research, examines how shared book reading affects the English language and literacy skills of young children learning English as an additional language (EAL).

Shared book reading involves an adult reading with one or more children, and is considered to be an effective practice for language and literacy development. It may also involve interactive practices such as dialogic reading techniques to engage children or reinforce specific ideas or words from the text.

For this meta-analysis, Lisa Fitton and colleagues identified 54 studies of shared reading interventions conducted in the US that met their inclusion criteria. The total number of participants across the studies was 3,989, with an average age of six.

Results revealed an overall positive effect of shared reading on the English language and literacy outcomes of EAL children (effect size = +0.28). Children’s developmental status moderated this effect, with larger effect sizes found in studies including only typically developing children (+0.48) than in studies including only participants with developmental disorders (+0.17).

Source: Shared book reading interventions with English learners: A meta-analysis (October 2018), Review of Educational Research, Vol 88, Issue 5

How significant is publication bias in educational research?

Research syntheses combine the results of all qualifying studies on a specific topic into one overall finding, or effect size. Publication bias arises when studies with larger or statistically significant effects are published more often than studies of equal quality with smaller or null findings. The danger of publication bias is that the published literature then fails to represent all of the research on a given topic, and instead emphasises the most dramatic results.
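To make the idea of "one overall effect size" concrete, here is a minimal Python sketch of fixed-effect inverse-variance pooling, the simplest way study results are combined; the study values are invented for illustration, and published meta-analyses typically use more elaborate (for example, random-effects) models.

# Fixed-effect inverse-variance pooling: each study's effect size is weighted
# by the inverse of its variance (1 / SE^2), so more precise studies count
# for more. All numbers are hypothetical.
effect_sizes = [0.35, 0.10, 0.22, 0.28]      # hypothetical standardised mean differences
standard_errors = [0.12, 0.08, 0.15, 0.10]   # hypothetical standard errors

weights = [1 / se ** 2 for se in standard_errors]
pooled = sum(w * d for w, d in zip(weights, effect_sizes)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:+.2f} (SE = {pooled_se:.2f})")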

In this month’s Educational Psychology Review, an article by Jason Chow and Erik Ekholm of Virginia Commonwealth University examines the extent of publication bias in education and special education journals. They examined the differences in mean effect sizes between published and unpublished studies included in meta-analyses (one kind of research synthesis), whether the characteristics of published and unpublished studies differed in any systematic way, and how often these meta-analyses tested for publication bias.

From 20 journals, 222 meta-analyses met inclusion criteria for the meta-review, with 29 containing enough information to also be eligible for effect size calculations. The researchers found that for the 1,752 studies included in those meta-analyses, published studies had significantly higher effect sizes than the unpublished studies (ES=+0.64), and studies with larger effect sizes were more likely to be published than those with smaller effect sizes. Fifty-eight percent (n=128) of the meta-analyses did not test for publication bias. The authors discuss the implications of these findings.
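As an illustration of what such a test can look like, the sketch below implements one widely used check, Egger's regression test, in Python. It is offered only as an example of the general technique, not necessarily the procedure used in the meta-analyses reviewed, and the study values are invented.

# Egger's regression test, a common check for publication bias: regress each
# study's standardised effect (effect / SE) on its precision (1 / SE). An
# intercept far from zero (roughly more than two standard errors) suggests
# funnel-plot asymmetry. All values are hypothetical.
from scipy import stats  # intercept_stderr requires SciPy 1.6 or later

effect_sizes = [0.64, 0.31, 0.45, 0.12, 0.50, 0.05]
standard_errors = [0.30, 0.12, 0.20, 0.08, 0.25, 0.06]

standardised = [d / se for d, se in zip(effect_sizes, standard_errors)]
precision = [1 / se for se in standard_errors]

fit = stats.linregress(precision, standardised)
print(f"Egger intercept: {fit.intercept:+.2f} (SE = {fit.intercept_stderr:.2f})")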

Source: Do published studies yield larger effect sizes than unpublished studies in education and special education? A meta-review, Educational Psychology Review, Vol 30, Issue 3, pp 727–744

The effect of teacher coaching on teaching and learning

Matthew A Kraft and colleagues conducted a meta-analysis of the causal evidence on the effect of teacher coaching on teaching and learning. Their paper, published in the Review of Educational Research, reviewed 60 studies on teacher coaching programmes conducted after 2006 that measured the impact of teacher coaching on either teaching (measured using tools such as the Classroom Assessment Scoring System or the Early Language and Literacy Classroom Observation) or pupil academic performance (measured by standardised tests).

Their results found that sustained coaching improves both classroom teaching and pupil achievement, with pooled effect sizes of +0.49 standard deviations for teaching and +0.18 standard deviations for academic achievement.

However, the effectiveness of a teacher coaching programme seems to depend on the number of participants. When studies were divided into programmes with fewer than 100 participants and those with more than 100, the impact on teaching for the smaller programmes was nearly double that of the larger ones. For pupil achievement, the impact of the smaller programmes was nearly three times that of the larger programmes.

Source: The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence (February 2018), Review of Educational Research, Vol 88, Issue 4

The impact of professional development in early childhood education

Franziska Egert and colleagues in Germany and the Netherlands have conducted a meta-analysis of the effects of in-service professional development (PD) for early childhood educators on programme quality and children’s educational outcomes.

Studies were only included if they addressed the quality of child care or child development, involved early childhood teachers (including preschool, kindergarten and centre-based care), were quantitative, used an experimental or quasi-experimental design, reported effect sizes or the data needed to calculate them, and addressed children aged 0–7. This yielded 36 studies of 42 programmes evaluating quality ratings, and nine studies of 10 programmes evaluating both quality ratings and pupil outcomes.

Results showed that professional development improved the external quality ratings of early childhood education (effect size = +0.68), as evaluated using the Classroom Assessment Scoring System, Early Language and Literacy Classroom Observation, Environmental Rating Scales and Individualized Classroom Assessment Scoring System. Programmes providing 45–60 PD hours had the greatest impact on classroom practice compared with programmes offering fewer or more hours, and this was true regardless of whether teachers held a university degree. Further, programmes that relied solely on coaching were almost three times as effective as other programmes.

A second meta-analysis of a subset of studies (n = 486 teachers, 4,504 children) showed that improvement in the quality of early childhood education programmes was correlated with improvements in child development (effect size = +0.14), as measured by language and literacy scores, maths scores, social-behavioural ratings, and assessments of cognition, knowledge and school readiness.

Source: Impact of in-service professional development programs for early childhood teachers on quality ratings and child outcomes: a meta-analysis (January 2018), Review of Educational Research, Vol 88, Issue 3

How much does education improve intelligence?

A meta-analysis published in Psychological Science looks at how much education improves intelligence, and suggests that an additional year of schooling improves pupils’ IQ scores by between one and five points.

Stuart J Ritchie and colleagues looked at three particular types of quasi-experimental studies of educational effects on intelligence:

  1. Those estimating education-intelligence associations after controlling for earlier intelligence.
  2. Those using policy changes that result in individuals staying in schools for different lengths of time.
  3. Those using school-entry age cut-offs to compare children who are similar in age but who have different levels of schooling as a result of their specific birth dates.

Their meta-analysis comprised 142 effect sizes from 42 data sets involving over 600,000 participants. All three study designs showed consistent evidence that the length of time spent in school is associated with increased intelligence test scores (an average effect of +3.4 IQ points for one additional year of education). The third design, age cut-off, had the largest effect (+5.2 IQ points); the policy-change design showed an effect of +2.1 IQ points; and the first design, controlling for earlier intelligence, showed the smallest effect (+1.2 IQ points).

Source: How much does education improve intelligence? A meta-analysis (June 2018), Psychological Science, DOI: 10.1177/0956797618774253