by Brad C. Phillips
What if the solution to what ails education was a simple matter of timing? We know the fastest response times win the day in health care, retail, and other sectors, so why not education? The answer lies in when education’s data and accountability systems provide information – is it in time or too late?
Too often the success or failure of education is measured after the fact. Education reporting and data collection systems focus almost exclusively on metrics like graduation rates, test scores, and even employment, which are reported too late to be acted upon. While these metrics are important and represent goals we as a nation must attain, they are not designed to help those delivering education do the work required to meet those goals.
We post signs for what we want performance to be and publicly report those who fall short, but we do not provide educators with the right gauges for monitoring needs and responding in time. Drivers of education need indicators they can act on in time to make a difference for students.
Leading or lagging – how do you know?
By shifting the focus from lagging to leading indicators, educators can better respond to students’ needs “in time.” But the distinction between leading and lagging depends on where you sit.
Consider these examples:
15 percent of incoming freshmen will need remedial coursework.
Leading or lagging?
While this is a lagging indicator for the high schools that didn’t prepare these students, it is a leading indicator for college faculty and administrators. Fewer than 50 percent of remedial students complete their recommended remedial courses. Fewer than 25 percent of remedial students at community colleges earn a certificate or degree within eight years. Effectively responding to the needs of students in remedial courses can go a long way toward reducing dropouts and helping students complete a degree.
87 percent of the incoming freshman class progresses to their sophomore year.
Leading or lagging?
While important to measure and know, this is a lagging indicator for most college faculty. The dropouts have already dropped out.
In-class retention improved each semester this year.
Leading or lagging?
No one knows better than faculty that students who fall behind after their first class find it nearly impossible to catch up. Selecting leading indicators like in-class retention numbers – including whether a student showed up for class, completed assignments, and/or met with advisors and faculty – puts faculty in the driver’s seat to take action to avert students’ dropping out.
Often those determining accountability metrics either miss what is needed to improve, or assume that by setting a standard, educators will rise to the occasion. It’s an assumption that sets educators up to fail. Instead, asking those closest to the students to select the measures and data ensures they have the best information to support students’ success. These leading indicators help predict whether students may need more support to stay on track to graduate.
Odessa College is a great example of leadership and faculty taking a hard look at performance data and agreeing they had to do better by students. By identifying indicators that were within their control and that influence lagging indicators, they were able to chart a path of progress.
Match solutions to problems
The education sector remains faced with a challenging and important feat: finally closing persistent achievement gaps that keep far too many students from enjoying the economic advantages associated with earning a college degree. By identifying and responding to more leading indicators – essentially matching solutions to problems – educators can access the continuous feedback necessary to respond to students’ needs in time to keep them on track to earning a degree.
First appeared in the Michael and Susan Dell Foundation Blog, May 2015.