PMR2

About the Measures

Our work on the development of practical measures, routines, and representations is inspired by the Carnegie Foundation for the Advancement of Teaching and, in particular, Tony Bryk and colleagues’ use of improvement science to address persistent educational problems (Bryk, 2009). Improvement science consists of a set of principles, tools, and methodologies aimed at supporting educators in using scientific inquiry to develop solutions to practical problems in their own educational settings. Such solutions can then be adapted to support improvement efforts in other contexts.

A leading principle of improvement science is that “we cannot improve at scale what we cannot measure” (Bryk, Gomez, Grunow, & LeMahieu, 2015). As such, a key tool of improvement science is what Carnegie has called “practical measures,” or “measures for improvement” (Yeager, Bryk, Muhich, Hausman, & Morales, 2013). Practical measures are designed to provide practitioners with frequent, rapid feedback that enables them to assess and improve their practices.

Characteristics of practical measures include the following (adapted from Bryk et al., 2015; Yeager et al., 2013):

  • The focus of the measure is specific to an improvement goal.
  • The measure uses language that is relevant and meaningful to practitioners.
  • Data collection and analysis are undemanding and can be easily embedded in practitioner routines, thus making it feasible to use the measure on a monthly, weekly, or even daily basis.
  • The measure is sensitive to change.
  • The data produced by the measure are relevant to practitioners and have implications for action.

We are currently developing a suite of practical measures that provide information about students' experiences of high-leverage aspects of middle-grades mathematics instruction and the quality of supports for teachers to improve their classroom practices. We are designing these measures so that they can be used to inform instructional improvement efforts.

We have developed a survey assessing students’ experiences of discussion (small group discussion and whole class discussion), a survey assessing students’ experiences of the introduction to – or launch of – mathematical tasks, and a tool assessing the rigor of mathematical tasks that the teacher has selected for a lesson. We have focused on discourse, mathematics tasks, and the launch of mathematics tasks for two reasons: 1) district leaders and school-based coaches in our partner districts identified these as key areas for improvement; and 2) mathematics education research suggests that the rigor, or cognitive demand, of a task (Stein, Grover, & Henningsen, 1996), the quality of discussion (Franke, Kazemi, & Battey, 2007), and the quality of the teacher’s launch of a task (Jackson, Garrison, Wilson, Gibbons, & Shahan, 2013) matter greatly for students’ development of deep mathematical understandings and productive mathematical dispositions.

An assumption of our work is that the use of tools, by itself, is unlikely to support instructional improvement. Instead, the use of these tools needs to be embedded in ongoing professional learning. Our current work includes developing routines for implementing the measures as part of ongoing professional learning, and developing data representations that are useful for a range of users, including teachers, instructional coaches, and district mathematics leaders.

These tools are works in progress, and we hope that others will help us improve them by trying them out and adapting them to their specific improvement initiatives and organizational contexts. If you are interested in trying out the tools, you can learn more about their focus and development in our White Paper and access them here.