Moving the Needle on Community College Student Success

America’s community colleges work hard to serve their students. Dedicated faculty, staff, and administrators put in countless hours and commit their professional lives to improving student outcomes. But hard work alone does not mean those efforts are making a demonstrable difference in student success. According to the National Center for Education Statistics, the graduation rate within 150 percent of normal time (from the first institution attended, for first-time, full-time degree/certificate-seeking students at two-year postsecondary institutions) has remained fairly constant for the past thirteen years, ranging from 30.5 percent (2005 cohort) to 34.0 percent (2008 cohort). The most recent cohort (students entering in 2012) has a rate of 31.6 percent. While comparable data on part-time students is not available, community college educators know that the completion rate for these students is much lower than that of their full-time counterparts.

The way colleges plan and develop their efforts contributes to this lack of impact. In reviewing many college strategic plans, we see one glaring issue emerge over and over: the plans try to address too many things, and that gets in the way of making real and lasting change. We have seen plans with as many as fifty goals! Colleges are proud of these plans, believing they are doing a great deal to improve student success. But a college, or for that matter any organization, cannot develop and implement interventions and supports that effectively address such an abundance of goals.

In our book, Creating a Data-Informed Culture at Community Colleges, we make the case that a lack of focus gets in the way of real increases in student success. Just because data is available on a given metric does not mean that metric is an appropriate target for improvement. We argue that colleges try to improve too many metrics at once, and in doing so dilute limited resources across too many activities. The result: watered-down interventions and supports. By trying to do everything, colleges end up not implementing their changes with fidelity, and certainly not at the scale needed to move the needle on student success.

To increase the likelihood that a college’s strategic planning efforts will meaningfully drive positive change, we offer some brief thoughts that extend the concepts in our book.

We argue that colleges focus on the wrong metrics. Too many of the metrics targeted are not directly actionable. They are lagging indicators, such as completion rates and transfer rates; the students reported in these data have already left, whether by completing successfully or by dropping out. Instead, colleges need to focus on leading indicators, such as course retention and course success rates: actionable metrics that, if improved, drive gains on the lagging indicators. These are metrics related to students who can still benefit from programs, policies, interventions, and supports. We recommend focusing planning, development, and implementation efforts on no more than two leading indicators.

Colleges that have committed to focusing deeply on no more than a couple of leading indicators have made great gains, moving the needle for all students. These gains appear not just in their leading indicators but also in the lagging indicators those leading indicators directly influence.

But how did they do it? We recommend a five-step model:

1.  Begin with a focus on data related to leading indicators. Not just any leading indicators, but those most likely, if effectively improved, to produce gains in important lagging indicators such as completion.

2.  Next, colleges need to focus on impactful interventions aligned to the challenges their data presents. While there are a number of high-impact, research-based interventions, the intervention chosen must match the specific challenge. If a college has an issue with course completion, for example, a First Year Experience program can help a little, but such programs are really designed to help students “do college” and mostly affect term-to-term persistence.

3.  The next step is developing a plan for gaining faculty, staff, and administrator buy-in. Great strategic plans die on the shelf if the college community has not been engaged and has had little voice in developing the plan.

4.  Fourth, once there is buy-in, an implementation plan needs to be constructed. What we find fascinating is that colleges, with the best of intentions, struggle with implementation: they have difficulty ensuring that a new policy or practice is implemented well. It is important that colleges consider a project-management approach rather than expecting already busy staff to execute a new activity.

5.  Finally, there needs to be ongoing monitoring of the fidelity of implementation, along with a strong evaluation plan that includes a calculation of return on investment. Evaluation science is more than just collecting data on leading and lagging indicators. It also includes collecting qualitative data from those involved (faculty, program staff, and students) to determine why changes in practice and policy did or did not have the desired effect, and it must end with recommendations for improvement.
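To make the return-on-investment piece concrete, consider a deliberately simplified, hypothetical calculation (the figures are illustrative, not drawn from any actual program). Suppose a supplemental instruction initiative costs $60,000 per year, and the improved course success it produces retains 25 additional students who would otherwise have dropped, each worth roughly $4,000 per year in tuition and enrollment-based funding. The program then recovers about 25 x $4,000 = $100,000 in revenue, for an ROI of ($100,000 - $60,000) / $60,000, or roughly 67 percent, before even counting the benefits to the students themselves.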

What we are suggesting differs from the typical strategic planning process. Initially, it can feel like colleges are not doing enough. But if we are going to move the needle in a positive direction on student success, colleges can’t continue to expect their existing processes to produce the needed and hoped-for changes in policy and practice.

This blog post originally appeared on Voices in Education, the blog of Harvard Education Publishing Group, on August 30, 2017, with the title “Moving the Needle on Community College Student Success.”