Effect size has long been the standard measurement used in educational research. This
commonality allows for comparison across studies, between programmes, and so
on. It’s a tricky statistic, though, because its implications are not
necessarily understood by the typical consumer of research. For example, saying
that a programme has an effect size of +0.13 is likely to be less meaningful to
the layperson than saying that a programme yielded a gain of one month's learning.
In an effort to make effect sizes more reader-friendly, writers sometimes translate them into easier-to-understand terms, most often units of time, such as days or years of learning. Yet research statisticians warn that what is
gained in understandability may be lost in accuracy.
In an article appearing on Educational Researcher’s Online First site, RAND’s Matthew Baird and John Pane compared the “years of time” translation with three other reader-friendly measures: benchmarking against similar groups in other studies, percentile growth, and the probability of meeting a certain threshold, rating which were the most and least accurate reflections of effect sizes. Specifically, Baird and Pane used data from a 2017 evaluation of personalised learning that reported detailed assessment procedures, data structure, and methods of analysis, applying this information to determine whether each reader-friendly translation incorporated six properties they deemed necessary to preserve accuracy between the effect size reported and the more reader-friendly terms in which it was stated.
Results showed that the units of time translation was in fact the least accurate, while the percentile gains option yielded the best results.
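The percentile-gain translation has a simple form under the usual assumption of normally distributed outcomes: an effect size d moves an average pupil from the 50th percentile of the control distribution to the Φ(d) percentile. A minimal sketch (the function name and the +0.13 example are illustrative, not taken from the study):

```python
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Percentile points gained by an average pupil, relative to the
    50th percentile of the control distribution, assuming normally
    distributed outcomes (illustrative helper, not the authors' code)."""
    return 100 * NormalDist().cdf(effect_size) - 50

# An effect size of +0.13 corresponds to roughly the 55th percentile
# of the control distribution, i.e. a gain of about 5 percentile points.
print(round(percentile_gain(0.13), 1))  # → 5.2
```

This is the same arithmetic that underlies "percentile growth" reporting; unlike the units-of-time translation, it does not depend on assumptions about how much pupils learn per month or year.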
Source: Translating standardized effects of education programs into more interpretable metrics (2019), Educational Researcher DOI: 10.3102/0013189X19848729
This study is discussed further in this blogpost by Robert Slavin.
A research briefing published by the Education Endowment Foundation (EEF) looks at what progress has been made in embedding evidence-informed practice within teaching in England.
As part of
the brief, researchers from the National Foundation for Educational Research
(NFER) summarised findings from a nationally representative survey of 1,670
schools and teachers. The survey was conducted between September and November
2017, and investigated teachers’ research use. The results of the survey showed that:
Research evidence continues to play
a relatively small role in influencing teachers’ decision-making. Eighty-four percent of those
surveyed said that their continuing professional development was based on
information other than academic research.
Most teachers report that their schools offer supportive environments, which enable evidence-informed practice to flourish. Seventy-three percent
‘agreed’ or ‘strongly agreed’ that their school provided a positive culture for
professional development and evidence use.
Teachers report generally positive attitudes towards research evidence, even though it has only a small influence on their decision-making.
Responses varied by school phase, by type of respondent, and by type of school. Those who were more likely to report that their schools had a positive research culture, and that they used research to inform their selection of teaching approaches, included:
Senior leaders (as opposed to classroom teachers).
Primary school teachers (rather than secondary school teachers).
Teachers in schools in the lowest 25 percent of achievement (rather than the highest 25 percent).
Source: Teachers’ engagement with
research: what do we know? A research briefing (May 2019), Education Endowment Foundation
The Education Endowment Foundation has published an evaluation of Research Leads Improving Students’ Education (RISE). The programme, which was developed and delivered by Huntington School in York, aimed to improve the maths and English achievement of pupils in secondary school using a research-informed school improvement model.
Forty schools took part in the randomised controlled trial and were randomly allocated to either take part in RISE or to a control group which continued with business as usual. Schools participating in RISE appointed a senior teacher as a Research Lead who was responsible for promoting and supporting the use of research throughout the school. Support for Research Leads included an initial eight professional development sessions held over eight months, occasional follow-up meetings over two academic years, a customised email newsletter, a website with resources, a peer network, and school visits by the RISE team. The RISE team also provided a workshop for headteachers and annual workshops for English and maths subject leads.
The evaluation examined the impact on pupils in two cohorts:
in the first cohort (A) schools were exposed to only one year of RISE, while in the second cohort (B) schools experienced two years of the intervention. For
both the one-year and two-year cohorts, children in RISE schools made a small
amount of additional progress in maths (effect size = +0.09 for cohort A and
+0.04 for cohort B) and English (effect size = +0.05 for cohort A and +0.03 for
cohort B) compared to children in the control-group
schools. However, the differences were small and not significant, so the
evaluation concludes that there is no evidence that participating in one or two
years of the RISE programme has a positive impact on pupil achievement.
In addition, the evaluation highlights the importance of
schools’ ability and motivation to make use of the Research Lead in shaping
school improvement decisions and processes. For example, it suggests that
implementation was stronger when headteachers gave clear and visible support
for the project and Research Leads had additional dedicated time to undertake the role.
Source: The RISE
project: Evidence-informed school improvement (May 2019), Education Endowment Foundation
In a review of important 2017 releases, MDRC recently referenced a memo to policymakers with recommendations for increasing research use and applying evidence to all policy decisions, both educational and otherwise.
Programmes and policies should be independently evaluated. To ensure high-quality evaluations, they should be directly relevant to policy, free of political or other influences, and credible to subjects and consumers.
The government should provide incentives for programmes to apply evidence results to improve their performance.
A tiered evidence strategy, such as the one used in the Every Student Succeeds Act, should be used to set clear guidelines for standards.
Existing funding sources should be applied to generate evidence. A 1% set-aside was recommended.
Federal and state agencies should be allowed to access and share their data for evaluation purposes.
Source: Putting evidence at the heart of making policy (February 2017), MDRC
A new report from Child Trends reviews the literature on conditions under which US policy-makers are most likely to use research, including the presentation formats that best facilitate their use. The authors, Elizabeth Jordan and P Mae Cooper, offer several insights based on their review of the evidence, including:
Policy-makers prefer a personal connection or conversation to a written report. One reason the authors cite is that reports are undigested information, meaning they require some expertise to pull out the information that is most relevant to the situation at hand.
While personal connections are usually best, no legislator can build and maintain relationships with experts in every field. The authors say that usually it is legislative staffers who fill this gap. Reports that summarise findings from a body of research are particularly useful to staffers, as they cover a variety of topics at one time.
For research to be useful to policy-makers and their staff, it must be relevant. The authors note that the information must relate to current policy debates, show an impact on “real people”, present information that is useful across states or localities, and be easy to read.
There are some formatting decisions that can help improve a written report’s accessibility. The authors suggest bulleted lists, highlighted text, charts, and graphs to help a policy-maker or staffer quickly absorb the main points of the research.
The report also provides several real-life examples of how research has informed public policy. For instance, the authors describe how rigorous evidence of the short- and long-term positive outcomes for children and families who participated in early childhood home visiting led the Obama Administration to create a new federal home visiting programme.
Source: Building bridges: How to share research about children and youth with policymakers (2016), Child Trends
In his Huffington Post blog, Robert Slavin, director of the Center for Research and Reform in Education, discusses a study that evaluated a behaviour management programme, First Step to Success, for students with behaviour problems. The programme has been evaluated successfully many times. In this latest study, 200 children in grades 1 to 3 (Years 2 to 4) with serious behaviour problems were randomly assigned to experimental or control groups. On behaviour and achievement measures, students in the experimental group scored much higher, with effect sizes of +0.44 to +0.87.
The researchers came back a year later to see if the outcomes had been maintained. Despite the substantial impacts seen previously, none of the three prosocial/adaptive behaviour measures, only one of the three problem/maladaptive behaviour measures, and none of the four academic achievement measures still showed positive outcomes. However, the students had passed from teachers who had been trained in the First Step method to teachers who had not.
Dr Slavin says, “Imagine that all teachers in the school learned the program and all continued to implement it for many years. In this circumstance, it would be highly likely that the first-year positive impacts would be sustained and most likely improved over time.” He discusses the implications of the research, and the importance of continuing with successful interventions.
Source: Keep Up the Good Work (To Keep Up the Good Outcomes) (2016), Huffington Post