The effect of linguistic comprehension training on language and reading comprehension

Kristin Rogde and colleagues have completed a systematic review for the Campbell Collaboration that examines the effects of linguistic comprehension teaching on generalised measures of language and reading comprehension skills. Examples of linguistic comprehension skills include vocabulary, grammar and narrative skills.

The authors searched literature dating back to 1986 and identified 43 studies for inclusion in the review, covering samples of both pre-school and school-aged participants. Randomised controlled trials and quasi-experiments with a control group and a pre-post design were included.

Key findings of the review were as follows:

  • The linguistic comprehension programmes included in the review display a small positive immediate effect on generalised outcomes of linguistic comprehension.
  • The effect of the programmes on generalised measures of reading comprehension is negligible.
  • Few studies report follow-up assessment of their participants.

According to the authors, linguistic comprehension teaching has the potential to increase children’s general linguistic comprehension skills. However, effects vary depending on the type of outcome measure used to assess those skills.

Source: The effect of linguistic comprehension instruction on generalized language and reading comprehension skills: A systematic review (November 2019), Campbell Systematic Reviews

The impact of shared book reading on children’s language skills

This meta-analysis, published in Educational Research Review, explores whether shared reading interventions are equally effective across different study designs and outcome variables, and for children from different socioeconomic status (SES) groups.

Studies were included in the meta-analysis if they met the following criteria:

  • Must contain a universal and/or targeted shared book reading intervention.
  • Must include at least one control group.
  • Participants must be typically developing children aged seven years or younger.
  • Must not target multilingual populations and/or the acquisition of an additional language.
  • Must isolate the variable of interest (shared book reading).
  • Must report an objective quantitative measure of language ability.
  • Must provide sufficient data to calculate the effect size.

The results suggest that shared reading had an overall effect size of +0.19 on children’s language development. However, this effect was moderated by the type of control group used, and was near zero in studies with active control groups (ES = +0.03). No differences were found across outcome variables or SES groups.

Source: The impact of shared book reading on children’s language skills: A meta-analysis (September 2019), Educational Research Review, Volume 28

Improving the maths and reading skills of children in foster care

A study published in Oxford Review of Education evaluates the effects of TutorBright tutoring on the reading and maths skills of children in family foster care. TutorBright uses one-to-one, at-home tutoring with detailed instructor’s manuals and customised pupil workbooks. Children receive two one-hour tutoring sessions per week, on designated days of the week, for up to 50 hours of tutoring. Children in the waiting-list control group were asked to continue with their schooling as usual and not seek additional tutoring or academic support during the school year, and were then offered the tutoring intervention at the end of the school year. TutorBright tutors all had experience with teaching or mentoring, and an undergraduate or master’s degree (completed or in progress).

For the randomised controlled trial, conducted by Andrea J Hickey and Robert J Flynn, child welfare workers nominated foster care children in Ontario, Canada, who met the following criteria: enrolled in grades 1–11 (Years 2–12), fluent in English, currently living in a foster-family setting, and judged likely to remain in care for the duration of the study. Thirty-four children were randomly assigned to tutoring, and 36 to a waiting-list control condition.

The results suggest that the tutored children made greater gains than those in the control group in reading fluency (effect size = +0.16), reading comprehension (ES = +0.34) and maths calculation (ES = +0.39).

Source: Effects of the TutorBright tutoring programme on the reading and mathematics skills of children in foster care: a randomised controlled trial (July 2019), Oxford Review of Education, 45:4

Results of an early literacy intervention to improve reading outcomes

Evidence for Learning in Australia has published an evaluation report of a randomised controlled trial of MiniLit, a small-group, phonics-based programme for struggling Year 1 readers. The intervention is targeted at pupils in the bottom 25% for reading, and focuses on improving pupils’ literacy in five areas: phoneme awareness, phonics, fluency, vocabulary and comprehension.

The programme involved struggling readers from Year 1 classes in nine Australian primary schools in New South Wales, and consisted of 80 one-hour lessons delivered four to five days per week over 20 weeks. The lessons were delivered by teachers in school, outside regular lessons, to small groups of up to four pupils. A total of 237 pupils participated, of whom 119 were allocated to the MiniLit intervention group and 118 to the control group. Pupils in the control group received the school’s usual learning support for struggling readers, which could include whole-class approaches and/or support programmes.

Overall, there was no evidence that MiniLit had any additional impact on pupils’ reading at 12 months, compared with pupils receiving usual reading support, as measured using the York Assessment of Reading Comprehension – Passage Reading (YARC-PR) tests (ES = -0.04). However, there were some positive effects for reading accuracy (ES = +0.13) and reading rate (ES = +0.06). There was also evidence of improvement in foundational reading skills at six months, particularly letter-sound knowledge, which was sustained at 12 months.

The researchers point out, however, that the findings depended on the quality of the MiniLit lessons provided to pupils. Delivery was limited to 20 weeks, and in many cases teachers reported that this was not long enough to complete the programme for all groups. They suggest that improving how MiniLit is implemented may lead to more positive outcomes, although this would require further evaluation.

Source: MiniLit: Learning impact fund: Evaluation report (2019). Independent report prepared by the Murdoch Children’s Research Institute and the University of Melbourne for Evidence for Learning

Printed vs digital text: A meta-analysis

A meta-analysis in the Journal of Research in Reading has synthesised the findings of studies comparing print and digital text on time required to read, reading comprehension and readers’ perceptions of their comprehension. Researcher Virginia Clinton performed a systematic literature review, including only studies that used random assignment and were published between 2008 and 2018, yielding 29 reports of 33 studies for analysis. She found that readers require equal amounts of time to read print and digital text, although reading from screens negatively affected reading comprehension (effect size = -0.25). Readers were also better able to judge their comprehension accurately on paper (effect size = +0.20) than on screen.

The negative effect of reading from screens rather than paper did not differ between adult readers and children (under 18). However, the author suggests this finding should be interpreted with caution, because there were more studies with adult participants (26) than with child participants (7).

Best Evidence in Brief reported on an earlier meta-analysis solely examining reading comprehension, whose results also favoured printed text.

Source: Reading from paper compared to screens: A systematic review and meta‐analysis (May 2019), Journal of Research in Reading, volume 42, issue 2

Small class size vs. evidence-based interventions

The Ministry of Education in France introduced a policy in 2002 that reduced class size to no more than 12 pupils in areas determined to have social difficulties and high proportions of at-risk pupils, called Zones d’Education Prioritaire (ZEP). To evaluate the effectiveness of this policy, researcher Jean Ecalle and colleagues in France examined the effects of the policy-mandated class size reduction on the reading achievement of first grade (Year 2) pupils (Study 1), and compared them with the effects of an evidence-based literacy intervention on the reading achievement of at-risk children in normal-sized classes of 20 pupils (Study 2).

Study 1, on reducing class size, involved assigning classrooms, with the support of the Ministry, to either small (12 pupils per class, n = 100 classes) or large (20–25 pupils per class, n = 100 classes) class sizes. At the start of the 2002–03 school year, 1,095 children were pre-tested on pre-reading skills, and the groups were matched at pre-test. At the end of the school year, children were post-tested, with results favouring the small-class-size group on word reading (effect size = +0.14) and word spelling (effect size = +0.22).

In Study 2, researchers divided 2,803 first grade (Year 2) pupils in ZEP areas into an experimental group, who received an evidence-based reading intervention, and a control group, who did not. The intervention was a protocol developed by the Association Agir pour l’Ecole (Act for School), which sets out a hierarchy for teaching reading based on evidence-based methods, progressing from training phonological skills to learning letter sounds, decoding and fluency. Act for School monitored compliance with the protocol weekly. Class size for both groups was 20 pupils. Experimental teachers received one day of training and provided 30 minutes of teaching a day to average or high readers in groups of 10 to 12, and one hour a day to lower readers in groups of four to six. Again, children were pre-tested on reading skills and matched between groups. All post-test measures favoured the experimental group, with significant effects on word reading (effect size = +0.13) and word spelling (effect size = +0.12).

The researchers concluded that, based on the results of both studies, the optimal approach to improving the literacy skills of at-risk pupils would be a double intervention combining evidence-based practices with small classes.

Source: Effects of policy and educational interventions intended to reduce difficulties in literacy skills in grade 1 (June 2019), Studies in Educational Evaluation, Volume 61