Effect of preschool home visiting on school readiness

A study published in JAMA Pediatrics examines the sustained effects of a preschool home visiting programme on child outcomes in third grade (Year 4). Karen Bierman and colleagues conducted a randomised controlled trial of the Research-Based Developmentally Informed Parent (REDI-P) home visiting programme involving 200 families with preschool children recruited from 24 Head Start centres in Pennsylvania.

Families were randomly assigned either to receive the REDI-P intervention or to be sent maths learning games in the post (the control group). The intervention focused on improving academic performance and social-emotional adjustment, and on reducing children’s problems at home. Families received ten visits from home visitors during preschool and six follow-up visits in kindergarten, and parents received coaching to enhance parent–child relationships, as well as home learning materials to support children’s development and school readiness.

Overall, REDI-P produced sustained benefits four years after the intervention, with children in the intervention group needing and using fewer school services than children in the control group. Results showed improvements in academic performance in third grade, as measured by direct assessments of children’s sight-word reading fluency (effect size = +0.28) and by teacher-rated academic performance (effect size = +0.29). The intervention also promoted sustained improvements in children’s social-emotional adjustment, reflected in direct assessments of social understanding (effect size = +0.31), and produced reductions in the problems at home that parents reported (effect size = −0.28).
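The effect sizes quoted throughout this issue are standardised mean differences (Cohen’s d or a close variant): the gap between the intervention and control group means, divided by the pooled standard deviation. A minimal sketch of that calculation in Python, using made-up numbers rather than REDI-P’s data:

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference between treatment and control groups."""
    # Pooled standard deviation, weighting each group by its degrees of freedom
    pooled_sd = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Hypothetical reading-fluency scores (not the study's data): an effect size
# of +0.28 means the intervention group mean sits about 0.28 standard
# deviations above the control group mean.
print(round(cohens_d(52.8, 50.0, 10.0, 10.0, 100, 100), 2))  # 0.28
```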

Source: Effect of Preschool Home Visiting on School Readiness and Need for Services in Elementary School: A Randomized Clinical Trial, JAMA Pediatrics, 2018; 172(8): e181029.

Professional development and early childhood education and care

A meta-analysis published in Review of Educational Research summarises findings from studies that evaluated the effects of in-service training for early childhood teachers on the quality of early childhood education and care (ECEC) and on child outcomes. Overall, data from 36 studies with 2,891 teachers were included in the analysis. To qualify, studies had to measure child care quality externally at the classroom level, using certified raters.

The analysis, carried out by Franziska Egert and colleagues, revealed that at the teacher level, in-service training had a positive effect on the quality of ECEC, with an effect size of +0.68. Furthermore, a subset of nine studies (covering 486 teachers and 4,504 children) provided data on both quality ratings and child development; these showed a small effect at the child level (effect size = +0.14) and a medium effect at the corresponding classroom level (effect size = +0.45).

Source: Impact of In-Service Professional Development Programs for Early Childhood Teachers on Quality Ratings and Child Outcomes: A Meta-Analysis, Review of Educational Research, 88:3, 401–433.

How significant is publication bias in educational research?

Research syntheses combine the results of all qualifying studies on a specific topic into one overall finding or effect size. When studies with larger or more statistically significant effects are published more often than studies of equal quality with smaller, non-significant, or null findings, this is referred to as publication bias. The danger of publication bias is that the published literature does not accurately represent all of the research on a given topic, but instead emphasises the most dramatic findings.
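To make the combining step concrete, here is a minimal sketch of a fixed-effect meta-analysis, which pools study effect sizes weighted by the inverse of their variance, so larger and more precise studies count for more. This is a generic illustration with invented numbers, not the method or data of any study discussed here:

```python
def pooled_effect(effects, variances):
    """Fixed-effect (inverse-variance weighted) mean of study effect sizes."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled estimate
    return pooled, se

# Three hypothetical studies: the large, precise study gets the most weight
effects = [0.45, 0.20, 0.10]
variances = [0.010, 0.040, 0.090]

pooled, se = pooled_effect(effects, variances)
print(f"pooled effect = {pooled:.2f} (SE = {se:.2f})")  # 0.38 (SE = 0.09)
```

Dropping the near-null study (effect +0.10) from the pool, as publication bias effectively does, pushes the pooled estimate up to +0.40, which is exactly the kind of inflation the study below quantifies.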

In this month’s Educational Psychology Review, an article by Jason Chow and Erik Ekholm of Virginia Commonwealth University examines the extent of publication bias in education and special education journals. They examined the differences in mean effect size between published and unpublished studies included in meta-analyses (one kind of research synthesis), whether any pattern emerged in the characteristics of published versus unpublished studies, and how often these meta-analyses tested for publication bias.

From 20 journals, 222 meta-analyses met the inclusion criteria for the meta-review, 29 of which contained enough information to also be eligible for effect size calculations. The researchers found that, across the 1,752 studies included in those meta-analyses, published studies had significantly higher effect sizes than unpublished studies (ES = +0.64), and that studies with larger effect sizes were more likely to be published than those with smaller effect sizes. Fifty-eight percent (n = 128) of the meta-analyses did not test for publication bias. The authors discuss the implications of these findings.
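The paper does not prescribe a single test, but a common one of the kind being counted here is Egger’s regression: regress each study’s standardised effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero signals funnel-plot asymmetry consistent with publication bias. A rough sketch, assuming SciPy is available and using invented inputs:

```python
import math
from scipy import stats

def eggers_test(effects, variances):
    """Egger's regression test for funnel-plot asymmetry."""
    se = [math.sqrt(v) for v in variances]
    y = [e / s for e, s in zip(effects, se)]  # standardised effects
    x = [1.0 / s for s in se]                 # precision
    fit = stats.linregress(x, y)
    # An intercept far from zero suggests small-study asymmetry of the kind
    # produced by publication bias; assess it with a t-test on the intercept.
    t = fit.intercept / fit.intercept_stderr
    return fit.intercept, t

effects = [0.45, 0.38, 0.30, 0.25, 0.52, 0.15]
variances = [0.010, 0.020, 0.030, 0.050, 0.008, 0.090]
intercept, t = eggers_test(effects, variances)
print(f"Egger intercept = {intercept:.2f} (t = {t:.2f})")
```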

Source: Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-review, Educational Psychology Review, 30:3, 727–744.