Teachers’ use of intervention programmes to support underachieving pupils

A new research report from the RAND Corporation provides insight into teachers’ use of intervention programmes and the factors that may influence that use.

Laura Stelitano and colleagues used data from a sample of 4,402 teachers who indicated on the spring 2019 American Instructional Resources Survey (AIRS) that they teach English and/or maths. The survey asked teachers whether they used intervention programmes to support pupils who are performing below the required level for their year group in their respective subject area, and if so, to select the programmes they use from a list of common interventions.

The report found that, overall, intervention programmes were used less often for maths and in high (secondary) schools. Teachers were more likely to use intervention programmes in English (62%) than in maths (52%). Although high school teachers were less likely to use an intervention programme than elementary (primary) or middle school teachers, 42% of high school teachers still reported using a reading or maths intervention. The report also found that teachers’ use of intervention programmes varied with the level of school poverty. Teachers in high-poverty schools were more likely than those in lower-poverty schools to use intervention programmes in English. However, the use of maths intervention programmes did not appear to be tied to school poverty levels.

The authors of the report recommend that future research explore why such a large percentage of teachers are using intervention programmes, the quality of the programmes they use, and how they use the interventions to support learning.

Source: Teachers’ use of intervention programs: Who uses them and how context matters (2020), Insights from the American Educator Panels, RAND Corporation, RR-2575/16-BMGF/SFF/OFF

A systematic review of RCTs in education research

The use of randomised controlled trials (RCTs) in education research has increased over the last 15 years. However, RCTs have also been subject to criticism. Four key criticisms are that it is not possible to carry out RCTs in education; that the research design of RCTs ignores context and experience; that RCTs tend to generate simplistic universal laws of “cause and effect”; and that they are descriptive and contribute little to theory.

To assess these four key criticisms, Paul Connolly and colleagues conducted a systematic review of RCTs in education research published between 1980 and 2016, and considered what the evidence suggests about the use of RCTs in education practice.

The systematic review identified a total of 1,017 RCTs completed and reported between 1980 and 2016, of which just over three-quarters were produced in the last ten years of that period. Just over half of all RCTs were conducted in North America and just under a third in Europe. This finding addresses the first criticism by demonstrating that, overall, it is possible to conduct RCTs in education research.

While the researchers also found evidence to counter the other three criticisms, the review suggests that some progress remains to be made. The article concludes by outlining some key challenges for researchers undertaking RCTs in education.

Source: The trials of evidence-based practice in education: a systematic review of randomised controlled trials in education research 1980–2016 (July 2018), Educational Research, DOI: 10.1080/00131881.2018.1493353

How research design affects outcomes

As evidence-based reform becomes increasingly important in educational policy, it is essential to understand how research design might contribute to the effect sizes reported in experiments evaluating educational programmes. Educational Researcher has recently published an article that examines how methodological features such as publication type, sample size, and research design affect those reported effect sizes.
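As a rough guide for readers less familiar with the metric, the effect sizes in reviews of this kind are usually standardised mean differences: the difference between the average scores of the treatment and control groups, divided by the (pooled) standard deviation. On that scale, an effect size of +0.20 means the treatment group scored about a fifth of a standard deviation higher than the control group.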

A total of 645 studies from 12 recent reviews of evaluations of reading, mathematics, and science programmes were examined. The findings suggest that effect sizes are roughly twice as large in published articles, small-scale trials, and studies using experimenter-made measures as in unpublished documents, large-scale studies, and studies using independent measures, respectively. In addition, effect sizes are significantly higher in quasi-experiments than in randomised experiments.

Explanations for the effects of methodological features on effect sizes are discussed, as are implications for evidence-based policy.

Source: How Methodological Features Affect Effect Sizes in Education (2016), Educational Researcher

What works to increase research use?

A new systematic review from the EPPI-Centre at the Institute of Education looks at what works to increase research use by decision-makers. The review synthesised 23 existing reviews judged to be relevant and of adequate methodological quality.

There was reliable evidence that the following were effective:

  • Interventions that facilitate access to research evidence (for example, through communications strategies and evidence repositories), provided the intervention also seeks to enhance decision-makers’ opportunity and motivation to use that evidence.
  • Interventions that build decision-makers’ skills in accessing and making sense of evidence (such as critical appraisal training programmes), provided the intervention also seeks to enhance both their capability and their motivation to use research evidence.

There was only limited evidence that evidence use is enhanced by interventions that change decision-making structures and processes by formalising and embedding one or more of the other mechanisms of change within them (such as evidence-on-demand services that combine push, user-pull, and exchange approaches).

There was also reliable evidence that some intensive and complex interventions lead to an increase in evidence use. Overall, though, simpler and more clearly defined interventions appear to have a better likelihood of success.

Source: The Science of Using Science: Researching the Use of Research Evidence in Decision-Making (2016), EPPI-Centre

Eat, sleep, research, repeat

A new study published in Educational Researcher has analysed the amount of replication that takes place in education research.

Repeating a research study helps to increase confidence in its findings. It can help to address some possible criticisms of a single study, such as a bias towards publishing positive findings, hypothesising after the results are known, or the misuse of data or results.

Researchers looked at the entire publication history of the top 100 education journals (more than 160,000 articles). They found that only 0.13% of journal articles were replications, although that percentage is rising. In 1990 the replication rate was 1 in 2000 articles; now it is around 1 in 500.
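To put those figures on a common scale (a back-of-the-envelope conversion rather than numbers quoted in the study): 1 in 2,000 is 0.05% of articles and 1 in 500 is 0.2%, so the overall historical figure of 0.13% sits between the two.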

The majority of replications (67.4%) successfully reproduced the original findings. However, replications were significantly less likely to succeed when there was no overlap between the authors of the original study and the authors of the replication. This may indicate a bias when researchers replicate their own work or, more positively, that the original researchers benefit from the experience of having conducted the study before.

Source: Facts Are More Important Than Novelty: Replication in the Education Sciences (2014), Educational Researcher, 43(6).

Success in evidence-based reform: The importance of failure

The latest blog post from Robert Slavin, a professor at the IEE and director of the Center for Research and Reform in Education, considers the large number of randomised experiments evaluating educational programmes that find few significant effects on achievement. This is a problem that will take on increasing significance as results from the first cohort of the US Investing in Innovation (i3) grants are released.

At the same time, the Education Endowment Foundation in the UK, much like i3, will also begin to report outcomes. It’s possible that the majority of these projects will fail to produce significant positive effects in rigorous, well-conducted evaluations. However, there is much to be learned in the process. For example, the i3 process is producing a great deal of information about what works and what does not, what gets implemented and what does not, and the match between schools’ needs and programmes’ approaches.