An evaluation published by the Education Endowment Foundation (EEF) has found that introducing more frequent and structured lesson observations – where teachers observe their colleagues and give them feedback – made no difference to pupils’ GCSE maths and English results.
A randomised controlled trial of the whole-school intervention Teacher Observation was conducted in 82 secondary schools in England with high proportions of pupils who had ever been eligible for free school meals. In total, the study involved 14,163 pupils – 7,507 pupils (41 schools) in the intervention group and 6,656 pupils (41 schools) in the control group.
Maths and English teachers in the intervention schools were asked to take part in at least six structured 20-minute peer observations over a two-year period, with between 12 and 24 suggested. Teachers rated each other on specific elements of a lesson, such as how well they managed behaviour, engaged pupils in learning, or used discussion techniques.
The evaluation, which was conducted by a team from the National Foundation for Educational Research (NFER), found that Teacher Observation had no impact on pupils’ GCSE English and maths scores compared to those of pupils in control schools (effect size = -0.01).
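Effect sizes like the −0.01 reported here are standardised mean differences: the gap between the intervention and control group means, divided by a pooled standard deviation. As a minimal sketch of that calculation – using made-up illustrative scores, not data from the trial:

```python
import math

def effect_size(treatment, control):
    """Standardised mean difference between two groups, using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Illustrative GCSE-style point scores, not figures from the study
intervention = [5.0, 6.0, 4.0, 5.5, 6.5]
control = [5.1, 6.1, 4.1, 5.4, 6.4]
print(round(effect_size(intervention, control), 3))
```

A near-zero value such as the trial's −0.01 means the two groups' average scores were essentially indistinguishable once their spread is taken into account.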
Source: Teacher Observation: Evaluation report and executive summary (November 2017), Education Endowment Foundation
A study published by the Institute of Education Sciences in the US evaluates the impact of the Retired Mentors for New Teachers programme – a two-year programme in which recently retired teachers provide tailored mentoring to new teachers – on pupil achievement, teacher retention and teacher evaluation ratings. The new teachers meet with their mentors weekly on a one-to-one basis and monthly in school-level groups over the course of the two years.
Dale DeCesare and colleagues conducted a randomised controlled trial involving 77 teachers at 11 primary schools in Aurora, Colorado. Within each school, half of the new teachers were randomly assigned to a control group to receive the district’s business-as-usual mentoring support, while the other half received the intervention as well as business-as-usual mentoring support.
The study found that at the end of the first year, pupils who were taught by teachers in the programme group scored 1.4 points higher on the spring Measures of Academic Progress maths assessment than those taught by teachers in the control group (effect size = +0.064), and this difference was statistically significant. Reading achievement was also higher among pupils taught by teachers in the programme group; however, the difference was not statistically significant (effect size = +0.014 at the end of the first year and +0.07 at the end of the second year). The effect of the programme on teacher evaluation ratings and teacher retention was not significant, although more teachers in the programme group left after two years than in the control group.
Source: Impacts of the retired mentors for new teachers program (REL 2017–225) (March 2017), US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Central.
The edTPA is an assessment in the US, introduced in 2013, that evaluates prospective teachers’ classroom performance. It is used by more than 600 teacher education programmes in 40 states, and passing it is a requirement for licensure in seven states. To find out whether the test can accurately determine if teacher candidates who achieve higher scores improve their students’ outcomes more than lower-scoring candidates do, the National Center for Analysis of Longitudinal Data in Education Research (CALDER) conducted the first independent study of edTPA, and found mixed results.
The study followed 2,300 teacher candidates in Washington State who took the edTPA in 2014. Their scores were correlated with their students’ standardised test scores in reading and maths. The study found that new teachers who passed the edTPA on their first try increased their students’ reading achievement scores more than new teachers who did not pass on their first attempt. There were no differences in the effects on students’ maths scores.
The authors discuss the complicated implications of these findings for policy and practice. For example, they note that new teachers who fail the test the first time may ultimately become high-performing teachers, and warn against screening them out of the workforce.
Source: Evaluating Prospective Teachers: Testing the Predictive Validity of the edTPA (2016), National Center for Analysis of Longitudinal Data in Education Research (CALDER)
The Learning Policy Institute has published a review of research into teacher effectiveness as teachers become more experienced. The review takes advantage of advances in research methods and data systems that have allowed researchers to more accurately answer this question. Specifically, by including teacher fixed effects in their analyses, researchers have been able to compare a teacher with multiple years of experience to that same teacher when he or she had fewer years of experience.
The report reviews 30 studies published within the last 15 years that analyse the effect of teaching experience on student outcomes in the United States. The review concludes that:
- Teaching experience is positively associated with student achievement gains throughout a teacher’s career. Gains in teacher effectiveness associated with experience are steepest in teachers’ initial years, but continue to be significant as teachers reach the second, and often third, decades of their careers.
- As teachers gain experience, their students not only learn more, as measured by standardised tests, but are also more likely to do better on other measures of success, such as school attendance.
- Teachers’ effectiveness increases at a greater rate when they teach in a supportive and collegial working environment, and when they accumulate experience in the same grade level, subject, or district.
- More experienced teachers support greater student learning for their colleagues and the school as a whole, as well as for their own students.
Source: Does Teaching Experience Increase Teacher Effectiveness? A Review of the Research (2016), Learning Policy Institute.
Removing low-performing teachers can improve outcomes for students, according to a working paper from the National Bureau of Economic Research.
The study looked at the effect of IMPACT, a performance assessment and incentive system introduced in schools in Washington, DC. IMPACT evaluated all teachers every year based on multiple measures of effectiveness (eg, clearly described standards, several teacher observations by different observers, and student outcomes). Teachers received one of four ratings – highly effective, effective, minimally effective, or ineffective. Those rated ineffective (or minimally effective two years running) were dismissed. Total teacher turnover rates in the schools were higher (18% per year) than in similar non-participating schools (8-17%). Turnover rates were 46% among low-performing teachers (minimally effective or ineffective) and 13% among high-performing teachers (effective or highly effective).
The study found a small increase in student achievement in mathematics (effect size = +0.079) that was statistically significant, and a smaller increase for reading (+0.046), which was not statistically significant. Turnover of high-performing teachers had a negative but not statistically significant effect on student achievement in mathematics (-0.055). Turnover of low-performing teachers improved student performance in mathematics (+0.21) and reading (+0.14).
The authors conclude that under a robust system of performance assessment, turnover of low-performing teachers can generate meaningful gains in student outcomes, particularly for the most disadvantaged students.
Source: Teacher Turnover, Teacher Quality, and Student Achievement in DCPS (2016), The National Bureau of Economic Research.
In response to the lack of evidence in the US debate over whether pupils are being over-tested, the Council of the Great City Schools has conducted a detailed study on testing. They examined test practices at primary and secondary level in 66 of the largest urban school districts during the 2014-15 school year.
The authors found that the average pupil took eight standardised tests a year. Grade 8 (Year 9) pupils were tested the most, spending an average of 4.22 days per year being tested. Yet there was no correlation between the amount of test time and maths and reading achievement.
The study also revealed a number of problems with testing. States reported having to wait 2-4 months for school-level test results, meaning the data could not usefully guide teaching; test results were used in ways they weren’t intended to be (eg, to judge an individual staff member’s performance); the tests themselves were not an accurate measure of content knowledge; and pupils were tested in the same subject more than once for different reasons.
A survey of parents revealed that they support testing that accurately reflects their child’s performance in school, and that they do not support more difficult tests.
Source: Student testing in America’s great city schools: An inventory and preliminary analysis (2015), Council of the Great City Schools.