Data Use Principles for Education Leaders

In our recent book from Harvard Education Press, Creating a Data-Informed Culture in Community Colleges: A New Model for Educators, my colleague Brad Phillips and I present a data use model for student success grounded in the latest research on how people and organizations process information. Educators have focused on increasing data literacy for a few decades now, with little movement of the needle on student success. We argue that, given the many advances in our understanding of human neuroscience, judgment and decision-making, and organizational habits, educational institutions should capitalize on what we have learned and present information in ways that maximize its use.

Educational leaders have a central role to fulfill in improving the use of information to support student success. There are many lessons to be gleaned from our book. I thought I’d present just five important ones here.

First, you know the old saw about the difference between data and information. Unfortunately, most people think they know what the difference is, but we argue they do not. Unlike data, information is usable, useful, and actionable. To be usable, it must be in a format that is easily understood. The story must be clear. Figuring out what’s important in a table should not be like playing Where’s Waldo? To be useful, it must link clearly to student success efforts. When we coach educational institutions, we do not allow anything to be presented “for information only.” Our rule is that everything must be presented for one of two reasons: (a) it is mandated for compliance reporting, or (b) it is linked to student success efforts. The information must also be actionable; it should not be a dead end. All information should answer the question, “Are we being effective in our student success efforts?”

Second, know that different folks need different information. Administrators typically focus on the big goals such as graduation rates, persistence to degree, or transition to the next stage in a student’s life (middle to high school, high school to postsecondary education or employment, two-year college to four-year university or career, etc.). These are lagging indicators, which cannot be influenced directly: by the time you have this information about a group or cohort of students, they are already out the door. Faculty and program staff, on the other hand, focus on leading indicators. Leading indicators are actionable and drive your lagging indicators. Examples of leading indicators are course success rates, formative test results, attendance, and term-to-term persistence. With this information, faculty and staff can identify subpopulations of students for support and decide whether to modify an intervention to improve success, scale it up from an initial pilot, or abandon it for something potentially more impactful.

Third, reduce the amount of information disseminated. When we work with educational institutions, we’re typically introduced to volumes of tables and charts. New student information systems come preloaded with canned reports on all kinds of metrics. However, just because it’s available doesn’t mean it’s necessary. Go back to the first lesson, above. All of this data causes confusion and doesn’t tell the actionable story. Educators should not have to be analysts. Find out which metrics—the leading indicators—your faculty and staff need to decide whether they’re being successful and what changes they need to make. Then give them exactly those. We advocate for focusing on what matters rather than burying everyone on campus in data that is ultimately useless to them because they cannot tell what information is most important.

Fourth, focus on high-impact, research-based, scalable interventions—whether they’re student supports, policies, professional development, or something else. While there are a number of high-impact, research-based interventions, the intervention chosen needs to be aligned with the challenge at hand. If a college has an issue with course completion, for example, a First Year Experience program can help a little, but it is really designed to help students “do college” and mostly affects term-to-term persistence. If a high school is struggling to increase success among English learners, don’t choose a support program geared to the general student population. Avoid implementing too many interventions at small scale (what we call the “ornaments on a tree” approach). Go for one or at most two big bets.

Fifth, hold folks accountable for their leading indicators. Start with a logic model that clearly describes what you plan to do and why you expect it to make a difference. Your logic model should identify the lagging indicators, or big goals, typically referred to as logic model outcomes. Backward map to the leading indicators you intend to track, the ones that, if accomplished, ensure your lagging indicators will be achieved. In a logic model, these leading indicators are typically referred to as outputs. Then hold folks accountable for those leading indicators. Holding faculty accountable for graduation rates is frustrating because they cannot directly influence that indicator. But holding them accountable for course success rates (students demonstrating proficiency) is within their control. They can use formative assessments to identify which students require extra support. High school faculty can identify students with chronic absenteeism and implement related interventions.

Being a good data use leader means enabling focus by reducing the amount of information available, ensuring the information is easily understood and supports action, and holding folks accountable for indicators within their power to change. When it comes to using data to support student success, less is definitely more.

Originally posted by the Institute for Educational Leadership on the Education Policy Fellowship Program alumni blog, December 2017.