A systematic review of RCTs in education research

The use of randomised controlled trials (RCTs) in education research has increased over the last 15 years. However, RCTs have also attracted criticism, with four key criticisms being: that it is not possible to carry out RCTs in education; that the research design of RCTs ignores context and experience; that RCTs tend to generate simplistic universal laws of “cause and effect”; and that they are descriptive and contribute little to theory.

To assess these four criticisms, Paul Connolly and colleagues conducted a systematic review of RCTs in education research published between 1980 and 2016, considering what the evidence suggests about the use of RCTs in education practice.

The systematic review found a total of 1,017 RCTs completed and reported between 1980 and 2016, of which just over three-quarters were produced in the last ten years. Just over half of all RCTs were conducted in North America and just under a third in Europe. This finding addresses the first criticism, demonstrating that, overall, it is possible to conduct RCTs in education research.

While the researchers also found evidence to counter the other key criticisms, the review suggests that some progress remains to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.

Source: The trials of evidence-based practice in education: a systematic review of randomised controlled trials in education research 1980–2016 (July 2018), Educational Research, DOI: 10.1080/00131881.2018.1493353

How research design affects outcomes

As evidence-based reform becomes increasingly important in educational policy, it is becoming essential to understand how research design might contribute to reported effect sizes in experiments evaluating educational programmes. Educational Researcher has recently published an article that examines how methodological features such as types of publication, sample sizes, and research designs affect effect sizes in experiments.

A total of 645 studies from 12 recent reviews of evaluations of reading, mathematics, and science programmes were examined. The findings suggest that effect sizes are roughly twice as large for published articles, small-scale trials, and experimenter-made measures as for unpublished documents, large-scale studies, and independent measures, respectively. In addition, effect sizes are significantly higher in quasi-experiments than in randomised experiments.
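
Effect size here refers to the standardised mean difference between programme and control groups. As a rough illustration only (not the article's analysis), the sketch below computes Cohen's d from summary statistics for two hypothetical studies, showing what a "twice as large" effect size looks like numerically; every figure in it is invented.

```python
# Illustrative sketch only: Cohen's d (standardised mean difference) computed
# from reported summary statistics. All numbers below are hypothetical and do
# not come from the reviewed evaluations.

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardised mean difference using a pooled standard deviation."""
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# Hypothetical small trial using an experimenter-made test
d_small = cohens_d(mean_t=52.0, sd_t=10.0, n_t=40, mean_c=48.0, sd_c=10.0, n_c=40)

# Hypothetical large trial using an independent, standardised test
d_large = cohens_d(mean_t=50.9, sd_t=10.0, n_t=1500, mean_c=48.9, sd_c=10.0, n_c=1500)

print(f"small / experimenter-made: d = {d_small:.2f}")  # ~0.40
print(f"large / independent:       d = {d_large:.2f}")  # ~0.20
print(f"ratio:                     {d_small / d_large:.1f}x")  # ~2.0x
```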

Explanations for the effects of methodological features on effect sizes are discussed, as are implications for evidence-based policy.

Source: How Methodological Features Affect Effect Sizes in Education (2016), Educational Researcher

What works to increase research use?

A new systematic review from the EPPI-Centre at the Institute of Education looks at what works to increase research use by decision-makers. The review included 23 reviews whose relevance and methodological quality were judged appropriate.

There was reliable evidence that the following were effective:

  • Interventions facilitating access to research evidence, for example, through communications strategies and evidence repositories, conditional on the intervention design simultaneously trying to enhance decision-makers’ opportunity and motivation to use evidence.
  • Interventions building decision-makers’ skills to access and make sense of evidence (such as critical appraisal training programmes) conditional on the intervention design simultaneously trying to enhance both capability and motivation to use research evidence.

There was limited evidence that evidence use is enhanced by interventions that foster changes to decision-making structures and processes by formalising and embedding one or more of the other mechanisms of change within existing structures and processes (such as evidence-on-demand services integrating push, user-pull, and exchange approaches).

There was also reliable evidence that some intense and complex interventions lead to an increase in evidence use. Overall, though, simpler and more clearly defined interventions appear to have a better likelihood of success.

Source: The Science of Using Science: Researching the Use of Research Evidence in Decision-Making (2016), EPPI-Centre

Eat, sleep, research, repeat

A new study published in Educational Researcher has analysed the amount of replication that takes place in education research.

Repeating a research study helps to increase confidence in its findings. It can help to address some possible criticisms of a single study, such as a bias towards publishing positive findings, hypothesising after the results are known, or misuse of data or results.

Researchers looked at the entire publication history of the top 100 education journals (more than 160,000 articles). They found that only 0.13% of journal articles were replications, although that percentage is rising. In 1990 the replication rate was 1 in 2000 articles; now it is around 1 in 500.
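
To make these figures concrete, the short sketch below is a back-of-the-envelope calculation (not the authors' analysis) that converts the reported 0.13% share into a "1 in N" rate, using the approximate article count given above.

```python
# Back-of-the-envelope check of the replication figures reported above;
# not taken from the article's own analysis.

total_articles = 160_000      # approximate corpus size reported in the study
replication_share = 0.0013    # 0.13% of articles were replications

replications = total_articles * replication_share
one_in_n = 1 / replication_share

print(f"Roughly {replications:.0f} replication studies")      # ~208
print(f"Overall rate: about 1 in {one_in_n:.0f} articles")    # ~1 in 770

# The trend reported above, expressed as shares of published articles:
rate_1990 = 1 / 2000   # 0.05% of articles in 1990
rate_now = 1 / 500     # 0.2% of recent articles
print(f"1990: {rate_1990:.2%} of articles; now: {rate_now:.2%}")
```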

The majority of replication studies (67.4%) successfully reproduced the original findings. However, replications were significantly less likely to be successful when there was no overlap between the authors of the original study and the replication. This may indicate potential bias when researchers replicate their own work or, more positively, that the original researchers benefit from the experience of having done the study before.

Source: Facts Are More Important Than Novelty: Replication in the Education Sciences (2014), Educational Researcher, 43(6).

Success in evidence-based reform: The importance of failure

The latest blog post from Robert Slavin, a professor at the IEE and director of the Center for Research and Reform in Education, considers the large number of randomised experiments evaluating educational programmes that find few achievement effects. This is a problem that will take on increasing significance as results from the first cohort of the US Investing in Innovation (i3) grants are released.

At the same time, the Education Endowment Foundation in the UK, much like i3, will also begin to report outcomes. It’s possible that the majority of these projects will fail to produce significant positive effects in rigorous, well-conducted evaluations. However, there is much to be learned in the process. For example, the i3 process is producing a great deal of information about what works and what does not, what gets implemented and what does not, and the match between schools’ needs and programmes’ approaches.

Is education research looking at the right issues?

A new report published by the Aspen Institute considers how US federal policy influences education research. The report includes a useful summary of the way that the federal government funds education research through a plethora of agencies. This is followed by a series of essays looking at ways in which this might be improved in the future. In his essay, Robert Slavin suggests some potential new directions for education research. “Research, evaluation, and dissemination of effective approaches should be the cornerstone of progress in America’s elementary and secondary schools,” he says.

Source: Leveraging Learning: The Evolving Role of Federal Policy in Education Research (2013), The Aspen Institute.