Research syntheses combine the results of all qualifying studies on a specific topic into one overall finding or effect size. When studies with larger, statistically significant effects are published more often than equally well-conducted studies with smaller or even null findings, this is referred to as publication bias. The danger of publication bias is that the published literature does not accurately represent all of the research on a given topic, but instead emphasises the most dramatic findings.
In this month’s Educational Psychology Review, an article by Jason Chow and Erik Eckholm of Virginia Commonwealth University examines the extent of publication bias in education and special education journals. They examined the differences in mean effect sizes between published and unpublished studies included in meta-analyses (one kind of research synthesis), whether any study characteristics were commonly associated with publication status, and how many of the meta-analyses tested for publication bias.
From 20 journals, 222 meta-analyses met inclusion criteria for the meta-review, 29 of which contained enough information to also be eligible for effect size calculations. Across the 1,752 studies included in those meta-analyses, the researchers found that published studies had significantly higher effect sizes than unpublished studies (ES = +0.64), and that studies with larger effect sizes were more likely to be published than those with smaller effect sizes. Fifty-eight percent (n = 128) of the 222 meta-analyses did not test for publication bias. The authors discuss the implications of these findings.
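The article does not specify which publication bias tests the meta-analyses used, but a common choice is a funnel-plot asymmetry check such as Egger's regression: standardized effects are regressed on precision, and an intercept far from zero suggests that small, imprecise studies report systematically inflated effects. As a rough illustration only (not the authors' procedure), here is a minimal sketch on synthetic data; the effect sizes, standard errors, and bias strength are all invented for demonstration.

```python
import random

def egger_test(effects, ses):
    """Egger-style regression test for funnel-plot asymmetry (a sketch).

    Regresses standardized effects (effect / SE) on precision (1 / SE)
    with ordinary least squares. An intercept far from zero suggests
    small-study effects consistent with publication bias.
    Returns (intercept, slope).
    """
    z = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precision
    n = len(x)
    mean_x = sum(x) / n
    mean_z = sum(z) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxz = sum((xi - mean_x) * (zi - mean_z) for xi, zi in zip(x, z))
    slope = sxz / sxx                # estimate of the underlying effect
    intercept = mean_z - slope * mean_x
    return intercept, slope

# Synthetic biased literature: imprecise (small) studies report
# inflated effects, mimicking publication bias.
random.seed(0)
ses = [random.uniform(0.05, 0.5) for _ in range(200)]
true_effect = 0.2
effects = [true_effect + 1.5 * s + random.gauss(0, s) for s in ses]

intercept, slope = egger_test(effects, ses)
print(f"intercept = {intercept:.2f} (near 0 would indicate symmetry)")
```

On this fabricated sample the intercept comes out well above zero, flagging the built-in asymmetry; with unbiased data it would hover near zero.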
Source: Do Published Studies Yield Larger Effect Sizes than Unpublished Studies in Education and Special Education? A Meta-review. Educational Psychology Review, 30(3), 727–744