Report finds mixed effects for Saxon Math

The Institute of Education Sciences (IES) in the US has released a new research report on Saxon Math, and findings show mixed results for the programme.

Saxon Math is a curriculum for pupils in grades K-12 (Years 1-13). It uses an incremental structure that distributes content throughout the year. For the IES report, researchers reviewed studies of Saxon Math’s primary courses, which include kindergarten (Year 1) through pre-algebra. Out of 26 studies eligible for review, five fell within the scope of the What Works Clearinghouse’s (WWC) primary maths topic area and met WWC design standards. These five studies included 8,855 pupils in grades 1–3 and 6–8 in 149 schools across at least 18 states.

According to the report, the estimated impact of the intervention on outcomes in the mathematics achievement domain was positive and substantively important in two studies and indeterminate in three studies. The authors conclude that Saxon Math has mixed effects on maths test scores of pupils in primary classes.

Source: Saxon Math (May 2017), US Department of Education, Institute of Education Sciences, What Works Clearinghouse

What do pupils believe about learning and intelligence?

This study examined reported attitudes and beliefs about growth mindset (the belief that intelligence and academic ability are not fixed and can be increased through effort and learning) for a sample of 103,066 pupils and 5,721 teachers in grades 4–12 (Years 5–13) in Nevada’s Clark County School District in the US.

Three-quarters of pupils reported beliefs consistent with a growth mindset. The average growth mindset score across all pupils was 4 on a scale of 1 to 5 (where 1 indicates agreement with all statements that suggest a fixed-ability mindset, and 5 indicates disagreement). In addition, reported beliefs were found to differ depending on pupils’ ethnicity, school year, prior achievement and whether or not pupils were native English speakers. For example, the average growth mindset score for pupils with English as an Additional Language (EAL) was lower (3.5) than the average growth mindset score for non-EAL pupils (4.0). Lower-achieving pupils reported lower levels of growth mindset than their higher-achieving peers (a difference of 0.8 points).

Teachers’ average growth mindset score was 0.5 points higher than their pupils’ (4.5 compared with 4.0). For the most part, their beliefs regarding growth mindset did not vary significantly depending on the characteristics of the pupils attending their schools.

Source: Growth mindset, performance avoidance, and academic behaviors in Clark County School District (REL 2017–226) (April 2017), US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory West

What is the evidence to support reading interventions?

A review from the National Center for Education Evaluation and Regional Assistance in the US assesses the evidence base supporting reading interventions in grades 1–3 (Years 2–4 in the UK) to improve reading outcomes for pupils struggling with typical classroom reading lessons.

The findings are based on studies of 20 interventions conducted in the US, which Russell Gersten and colleagues identified as meeting What Works Clearinghouse evidence standards. Of these 20 interventions, 19 produced positive or potentially positive effects in at least one area of reading. Interventions in grade 1 (Year 2) produced lower effects in reading comprehension (+0.39) than in word and pseudo-word reading (+0.45), but higher effects than in passage reading fluency (+0.23). For grade 2 and 3 (Years 3 and 4) interventions, the weighted mean effects in reading comprehension (+0.33) were lower than those for both word and pseudo-word reading (+0.46) and passage reading fluency (+0.37). Across all three grades, the strongest and most consistent effects were found in word and pseudo-word reading.

Although the evidence supports the efficacy of reading interventions, the review points out that the majority of interventions evaluated are interventions for individual pupils, as opposed to small-group interventions which are more typical in school settings. In addition, most of the interventions include high levels of ongoing support for teachers.

Source: What is the evidence base to support reading interventions for improving student outcomes in grades 1–3? (April 2017), US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast (REL 2017–271)

Using data to help ensure equity in suspensions

A new guide is available from the Institute of Education Sciences to help educators in the US use data to determine whether any ethnic groups are being disproportionately suspended or expelled within a school or district and, if so, how to use data to promote equity among all ethnic groups.

The guide is divided into two sections. The first describes how to use multiple sources of data to analyse whether a group is being disproportionately suspended or expelled, and how to determine the effectiveness of any interventions that might be in place. It also describes the data that can be used to analyse factors that may be contributing to any disproportionality. In cases where a school or district determines that there are inequalities that may be unjust, the second section outlines a process, called Plan-Do-Study-Act, that helps promote equitable discipline. One district’s successful experience using the Plan-Do-Study-Act process is described in detail. The guide concludes with websites and resources related to equity in school discipline and quality improvement processes.

Source: School discipline data indicators: A guide for districts and schools (April 2017), US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Northwest (REL 2017–240)

Professional development not the answer to improving maths achievement

Developing Mathematical Ideas (DMI) is a professional development programme designed to increase teachers’ knowledge of fourth grade (Year 5) fractions and rational numbers, with the ultimate goal of improving their pupils’ maths achievement.

A study conducted in the 2014–15 school year, prepared for the Institute of Education Sciences by Madhavi Jayanthi and colleagues at Instructional Research Group and REL Southeast, investigated the effects of DMI on teachers’ content knowledge and their pupils’ subsequent achievement in fractions. A total of 264 fourth grade (Year 5) teachers in 84 elementary (primary) schools in Florida, Georgia and South Carolina in the US were randomly assigned by school to receive either DMI (n=42 schools, 129 teachers) or their usual professional development programme (n=42 schools, 135 teachers). The 84 schools were matched on fourth grade enrolment, number of pupils who exceeded fourth grade maths standards, percentage of African American and Hispanic pupils, and percentage of pupils eligible for free or reduced-price lunches. In autumn 2014, DMI teachers received eight three-hour training sessions conducted over four days, followed by homework and concluding with a test on fractions. The baseline scores of 4,204 fourth grade pupils (2,091 intervention, 2,113 control) on third grade standardised tests were used as a pre-test, since most third graders know little about fractions. The Test for Understanding of Fractions was then used as a post-test at the end of the academic year to measure pupils’ knowledge gain after their teachers had completed DMI.

Results showed no significant differences between DMI and non-DMI teachers’ knowledge of fractions, nor between their pupils’ proficiency in fractions.

Source: Impact of the Developing Mathematical Ideas professional development program on grade 4 students’ and teachers’ understanding of fractions (March 2017), US Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast.

Examining the results of SIG funding

Former President Obama’s American Recovery and Reinvestment Act of 2009 included $3 billion of funding for School Improvement Grants (SIG). SIG awards went to states’ lowest-performing schools that agreed to implement improvements using one of four models (turnaround, transformation, restart, or closure) and four main improvement practices: adopting comprehensive school reform programmes; developing teacher and principal effectiveness; making more time for learning and creating community-orientated schools; and providing support and operational flexibility for schools.

Given the size and expense of the SIG programme, the Institute of Education Sciences at the US Department of Education commissioned a report by Lisa Dragoset and colleagues at Mathematica Policy Research, and Cheryl Graczewski and colleagues at the American Institutes for Research, to investigate the extent to which SIG-funded schools used the recommended practices, how these schools compared with non-funded schools, the effect of SIG funding on pupil outcomes, and which of the intervention models was most effective.

Researchers found that SIG funding had no effect on pupils’ maths or reading test scores, high school graduation rates, or likelihood of attending college. No SIG model was associated with greater gains than another at the elementary (primary school) level, although in grades 6–12 (Years 7–13), SIG-funded schools using the turnaround model were associated with higher pupil maths achievement than those using the transformation model. SIG-funded schools used more of the recommended improvement practices than non-funded schools, although not significantly more, and these practices were implemented most often in schools using the school reform model. These findings indicate that SIG funding did not significantly improve pupil achievement or increase the use of recommended practices, at least for schools near the SIG funding cut-off. The researchers noted that results might be different for schools further from the cut-off.

Source: School Improvement Grants: implementation and effectiveness (January 2017), Institute of Education Sciences