Juanjuan Chen and colleagues recently performed a meta-analysis on the effects of computer-supported collaborative learning (CSCL).
Drawing on 425 empirical studies (all of which used a controlled experimental or quasi-experimental design) published between 2000 and 2016, the researchers examined four main characteristics: the effects of the collaboration itself; the effects of computer use during collaboration; the effects of extra technology-related learning tools used in CSCL, such as videoconferencing and sharing visuals with team partners; and the effects of strategies such as role assignment and peer feedback.
Collaborative learning itself positively affected:
- Knowledge gain (+0.42)
- Skill acquisition (+0.62)
- Pupil perceptions of the experience (+0.38)
The use of computers, when combined with collaborative learning, positively affected:
- Knowledge gain (+0.45)
- Skill acquisition (+0.53)
- Pupil perceptions (+0.51)
- Group task performance (+0.89)
- Social interaction (+0.57)
Lastly, extra technology-related learning tools during CSCL positively affected knowledge gain (+0.55), as did the use of strategies (+0.38).
Source: The role of collaboration, computer use, learning environments, and supporting strategies in CSCL: A meta-analysis (December 2018), Review of Educational Research, 88(6).
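The effect sizes quoted throughout these summaries are standardised mean differences. As a rough illustration of what a figure like +0.42 represents, here is a minimal sketch (Cohen's d with a pooled standard deviation, using hypothetical score data, not data from any of the studies above):

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardised mean difference (Cohen's d) using a pooled SD."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    # Pool the two group variances, weighted by degrees of freedom.
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    # Difference in group means, expressed in pooled-SD units.
    return (mean(treatment) - mean(control)) / pooled_sd
```

A value of +0.42 therefore means the average treated pupil scored 0.42 pooled standard deviations above the average control pupil.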
A meta-analysis published in Review of Educational Research summarises findings from studies that evaluated the effects of in-service training for early childhood teachers on the quality of early childhood education and care (ECEC) and child outcomes. Overall, data from 36 studies with 2,891 teachers was included in the analysis. For studies to qualify, child care quality had to be measured externally with certified raters at the classroom level.
The analysis, carried out by Franziska Egert and colleagues, revealed that at the teacher level, in-service training had a positive effect on the quality of ECEC, with an effect size of +0.68. Furthermore, a subset of nine studies (486 teachers and 4,504 children) provided data on both quality ratings and child development; these showed a small effect at the child level (effect size = +0.14) and a medium effect at the corresponding classroom level (effect size = +0.45).
Source: Impact of in-service professional development programs for early childhood teachers on quality ratings and child outcomes: A meta-analysis, Review of Educational Research, 88(3), 401–433.
Matthew A Kraft and colleagues conducted a meta-analysis of the causal evidence on the effect of teacher coaching on teaching and learning. Their paper, published in the Review of Educational Research, reviewed 60 studies on teacher coaching programmes conducted after 2006 that measured the impact of teacher coaching on either teaching (measured using tools such as the Classroom Assessment Scoring System or the Early Language and Literacy Classroom Observation) or pupil academic performance (measured by standardised tests).
Their results found that sustained coaching improves both classroom teaching and pupil achievement, with pooled effect sizes of +0.49 standard deviations for teaching and +0.18 standard deviations for academic achievement.
However, the effectiveness of a teacher coaching programme seems to depend on the number of participants. When studies were divided into programmes with fewer than 100 participants and those with more than 100, the impact on teaching was nearly double for the smaller programmes, and their impact on pupil achievement was nearly three times that of the larger programmes.
Source: The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence (February 2018), Review of Educational Research, 88(4).
Franziska Egert and colleagues in Germany and the Netherlands have conducted a review of the effects of professional development (PD) for early childhood educators on programme quality and children’s educational outcomes.
Studies were only included if they addressed quality of child care or child development, included early childhood teachers (including preschool, kindergarten and centre-based care), were quantitative, were experimental or quasi-experimental, reported effect sizes or data, and addressed children aged 0–7. This yielded 36 studies of 42 programmes evaluating quality ratings, and nine studies of 10 programmes evaluating both quality ratings and pupil outcomes.
Results showed that professional development improved the external quality ratings of early childhood education (effect size = +0.68), as evaluated using the Classroom Assessment Scoring System, Early Language and Literacy Classroom Observation, Environmental Rating Scales and Individualized Classroom Assessment Scoring System. Programmes providing 45–60 PD hours had the greatest impact on classroom practice compared with programmes offering fewer or more hours, and this was true regardless of whether teachers held a university degree. Further, programmes that solely used coaching were almost three times as effective as other programmes.

A second meta-analysis of a subset of studies (486 teachers, 4,504 children) showed that improvement in the quality of early childhood education programmes was correlated with improvements in child development (effect size = +0.14), as determined by language and literacy scores, maths scores, social-behavioural ratings, and assessments of cognition, knowledge and school readiness.
Source: Impact of in-service professional development programs for early childhood teachers on quality ratings and child outcomes: A meta-analysis (January 2018), Review of Educational Research, 88(3).
A systematic review and meta-analysis published in Review of Educational Research looks at effective academic interventions for pupils with low socio-economic status (SES).
Jens Dietrichson and colleagues included studies that used a treatment–control group design, were performed in Organisation for Economic Cooperation and Development (OECD) and EU countries and measured achievement with standardised tests in maths or reading. The analysis included 101 studies performed between 2000 and 2014, 76% of which were randomised controlled trials.
Positive effect sizes (ES) were reported for many of the interventions. Comparatively large and robust average effect sizes were found for interventions that involved tutoring (ES = +0.36), feedback and progress monitoring (ES = +0.32) and co-operative learning (ES = +0.22). The report points out that, although these effect sizes are not large enough to close the gap between high- and low-SES pupils, they represent a substantial reduction of that gap if targeted towards low-SES pupils.
Source: Academic interventions for elementary and middle school students with low socioeconomic status: a systematic review and meta-analysis (January 2017), Review of Educational Research
Olusola O Adesope and colleagues conducted a meta-analysis to summarise the learning benefits of taking a practice test compared with non-testing learning conditions, such as re-studying, practice, filler activities, or no presentation of the material.
Analysis of 272 independent effect sizes from 188 separate experiments demonstrated that the use of practice tests is associated with a moderate, statistically significant weighted mean effect size compared to re-studying (+0.51) and a much larger weighted mean effect size (+0.93) when compared to filler or no activities.
In addition, the format, number and frequency of practice tests make a difference to the learning benefits on a final test. Multiple-choice practice tests have a larger weighted mean effect size (+0.70) than short-answer tests (+0.48). A single practice test prior to the final test is more effective than several practice tests. However, timing should be carefully considered: a gap of less than one day between the practice and final tests showed a smaller weighted mean effect size than a gap of one to six days (+0.56 and +0.82, respectively).
Source: Rethinking the use of tests: A meta-analysis of practice testing (February 2017), Review of Educational Research, DOI: 10.3102/0034654316689306
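The "weighted mean effect size" figures reported in several of these meta-analyses are typically produced by weighting each study's effect size by the inverse of its variance, so that more precise studies count for more. A minimal fixed-effect sketch, assuming each study supplies an effect size and its variance (hypothetical inputs, not the studies' actual data):

```python
def weighted_mean_effect(effects, variances):
    """Fixed-effect meta-analytic mean: weight each study by 1/variance."""
    weights = [1.0 / v for v in variances]
    # Weighted average of the study-level effect sizes.
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)
```

With equal variances this reduces to a simple average; a study with half the variance of another receives twice the weight.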