Getting Evaluations Right – How to Get Better at Getting Better

5 ways grantees can get better at getting better

Philanthropy has come a long way in evaluating and reporting on whether funded projects are working. Instead of furtively redirecting underperforming grants or attempting to shine up less-than-stellar results for an annual report, foundations are doing one better: learning to get better at getting better.

When well designed and implemented, evaluations can provide useful information and insights that drive strategy and impact. The key is providing grantees feedback in ways they can use to learn what’s working (or not) and act on that information to continuously improve. Five lessons are emerging:

1. Shorter feedback loops

When grantees receive evaluation feedback early and more often, they can make the changes necessary to deliver better results.

The Doceō project, launched by the J.A. and Kathryn Albertson Foundation (JKAF), established technology education centers at Northwest Nazarene University and the University of Idaho. The goal of the centers is to help student teachers and in-service teachers use technology to engage students in deeper, more relevant learning.

To ensure that only the best, most effective ideas make it to the classroom, the centers incorporated a “skunk works” lab that tests ideas with rapid feedback loops and a flexible, iterative design to fine-tune them and make them “student ready.”

Shorter feedback loops allow grantees to avert failure and build on success more quickly and efficiently.

2. Quantitative and qualitative data

The best evaluations use more than one way of knowing to clarify the results picture and deepen understanding.

JKAF uses structured reports that capture what was done and the impact of the activities, as well as changes made as a result of lessons learned. The work to document the Doceō project’s efforts and outcomes (both quantitative and qualitative) encourages reflection by the grantees and challenges them to continuously look for ways to improve results. The project is only a few years old and is now honing more ways to quantify outcomes.
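To make the idea concrete, here is a minimal sketch of what such a structured report might look like as a data structure. The field names and sample values are hypothetical, not JKAF’s actual report format; the point is that quantitative metrics and qualitative reflection travel together in one record.

```python
# A minimal sketch of a structured grantee report pairing quantitative
# outcomes with qualitative reflection. All field names and values are
# hypothetical, not JKAF's actual report format.
from dataclasses import dataclass, field

@dataclass
class GranteeReport:
    period: str
    activities: list[str]                                      # what was done
    metrics: dict[str, float] = field(default_factory=dict)    # quantitative outcomes
    lessons_learned: list[str] = field(default_factory=list)   # qualitative reflection
    changes_made: list[str] = field(default_factory=list)      # actions taken in response

report = GranteeReport(
    period="Spring term",
    activities=["Piloted a blended-learning unit with three cohorts"],
    metrics={"engagement_survey_avg": 4.2, "unit_completion_rate": 0.87},
    lessons_learned=["Teachers needed more preparation time than planned"],
    changes_made=["Added a co-planning hour before each pilot week"],
)
```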

JKAF and many others may benefit from a look at the James Irvine Foundation’s Linked Learning data system, which allows grantees to drill down to assess progress and compare populations on a range of metrics. This is coupled with a broader evaluation that provides qualitative information behind the numbers to tell an instructive story about the data.

3. Leading indicators

Too often the success or failure of education is measured after the fact. Education reporting and data-collection systems focus almost exclusively on metrics like graduation rates, test scores, and even employment, which are reported too late to be acted upon. While important and representative of goals we as a nation must attain, these metrics are not designed to help those delivering education do the work required to meet those goals. By shifting the focus from lagging to leading indicators, educators can respond to students’ needs in time to make a difference.

Looking upstream after establishing intended outcomes for funding will reveal leading indicators that provide early warning signals for projects not headed in the right direction. Tracking graduation rates is an important measure of success, but this lagging indicator leaves no room for improvement, and gives no indication of where to improve, if projects are not meeting their targets. Leading indicators such as attendance, course success, and math progression rates can provide early warning in time for action and direct stakeholders to where, and to whom, attention is needed. Much of the criticism of philanthropy’s education work focuses on failures of implementation; with indicators that identify those failures early enough to fix them, everything changes.
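As an illustration, here is a minimal sketch of what such an early-warning check might look like. The indicator names, thresholds, and site records are hypothetical, invented for this example; a real system would draw them from a district’s data and set thresholds with educators.

```python
# A minimal sketch of a leading-indicator early-warning check.
# All indicator names, thresholds, and records are hypothetical.

LEADING_INDICATOR_THRESHOLDS = {
    "attendance_rate": 0.90,        # flag below 90% attendance
    "course_success_rate": 0.70,    # flag below 70% of courses passed
    "math_progression_rate": 0.75,  # flag below 75% on track in math
}

def early_warnings(record: dict) -> list[str]:
    """Return the leading indicators on which this record falls short."""
    return [
        indicator
        for indicator, threshold in LEADING_INDICATOR_THRESHOLDS.items()
        if record.get(indicator, 0.0) < threshold
    ]

# Hypothetical site-level numbers a grantee might report each term.
sites = [
    {"site": "Lincoln High", "attendance_rate": 0.93,
     "course_success_rate": 0.66, "math_progression_rate": 0.80},
    {"site": "Valley Middle", "attendance_rate": 0.88,
     "course_success_rate": 0.74, "math_progression_rate": 0.71},
]

for site in sites:
    flags = early_warnings(site)
    if flags:
        print(f"{site['site']}: needs attention on {', '.join(flags)}")
```

The point is the timing: a check like this can run every term, while graduation rates arrive years after the window for action has closed.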

4. Visualize and make data actionable

Organizations can be datapaloozas. The Robin Hood Foundation, for example, evaluates grants using 163 different formulas and reports the results with graphics that tell a story.

The holy grail of evaluation is taking data out of its fields and putting it to work on behalf of grantees. Dashboards and interactive visualization tools help grantees peel back layers to compare populations, reveal disparities, and identify areas of both progress and need.

For example, NPR used data from Achieve and the NCES Common Core of Data to create a searchable database and graphics that help bring to life a complicated truth about graduation rates. This kind of reporting and presentation provokes interest in the why and for whom.
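As a toy illustration of the kind of drill-down comparison these tools enable, the sketch below charts hypothetical on-track rates by student population. The populations, rates, and output filename are invented; the point is making a disparity visible at a glance.

```python
# A toy sketch of the drill-down comparison behind a dashboard view.
# The populations, rates, and output filename are hypothetical.
import matplotlib.pyplot as plt

records = [
    {"population": "English learners", "on_track_rate": 0.58},
    {"population": "First-generation", "on_track_rate": 0.67},
    {"population": "All students", "on_track_rate": 0.78},
]

labels = [r["population"] for r in records]
rates = [r["on_track_rate"] for r in records]

fig, ax = plt.subplots()
ax.barh(labels, rates)  # horizontal bars keep population labels readable
ax.set_xlabel("On-track rate")
ax.set_xlim(0, 1)
ax.set_title("On-track rates by student population (hypothetical)")
fig.tight_layout()
fig.savefig("on_track_by_population.png")
```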

Routine collaboration around data creates a culture of reflection on what has been accomplished (and what has not). This is the hallmark of learning organizations committed to continuous improvement.

5. Open source what works and doesn’t

In February, billionaire Eli Broad announced the suspension of his foundation’s $1 million prize given to the best urban school systems out of concern that schools are not improving fast enough. The bold announcement was the result of deep disappointment after watching rock-star district performances backslide under changing leadership.

Imagine if the Broad Foundation open-sourced the leadership challenges it encountered and invited groups that have been working to strengthen and stabilize leadership for decades, such as the Wallace and Carnegie Foundations, to help it problem-solve. This could be an innovative way to address the systemic issues the Broad Academy outlined in a recent Medium post. Pharmaceutical companies and medical researchers increasingly open-source research failures in pursuit of greater efficiencies and breakthroughs in finding new cures and treatments.

Collective sharing and problem solving are at the heart of the Nashville Public Education Foundation’s Project RESET. The effort recognizes that education perpetually suffers from solutions that tackle one part of the problem but miss the whole.

In philanthropy, methods fail all the time, but stigma, a lack of courage, or simply not knowing what to do keeps us from learning and changing. If we evolve our approach and tools for the job, evaluations can be more relevant and useful and help philanthropy achieve desired, lasting change.

By Jordan Horowitz | Originally posted on medium.com