Funding partners want to invest in the organizations and programs most likely to succeed in helping people, not in programs because they make money. Because "helping people" isn't easily monetized (though many try!), a critical part of nonprofit storytelling is helping funders understand why and how their investment results in lasting change. Defining what it means to help people is a values-driven and data-informed conversation. We have created this document to help you talk to funders about return on investment.
Fund for Shared Insight pools financial and other resources to provide grants, coaching, inspiration, and community-building through collaborative philanthropy. Our work reflects our commitment to the kind of listening and learning that values lived experience and advances equity. The overarching goal is that foundations and nonprofits be meaningfully connected to the people and communities most harmed by structural racism and other systemic inequities, and more responsive to their insights and feedback. Listen4Good is a capacity-building initiative of the Fund for Shared Insight that helps direct-service organizations listen and respond to the people and communities at the heart of their work. Listen4Good's suite of specially designed programs offers the expert tools, resources, and coaching organizations need to build high-quality feedback loops that advance equity and lead to positive changes in the ways they make decisions, deliver services, and partner with clients.
For purpose-driven organizations, data means more than just numbers and graphs; it is about understanding what more you can do to change lives and strengthen communities. The Data Playbook, from Charles and Lynn Schusterman Family Philanthropies, provides the building blocks you need to put data to work for your mission through a measured approach to understanding and telling stories of impact.
The Bill & Melinda Gates Foundation Guide to Actionable Measurement is driven by three basic principles: 1) Measurement should be designed with a purpose in mind, to inform decisions and/or actions; 2) Do not measure everything, but strive to measure what matters most; 3) Because the foundation's work is organized by strategies, the data it gathers help it learn about and adapt its initiatives and approaches. This guide includes a results matrix, definitions of terms in the foundation's results hierarchy, and a set of measurement guidelines intended to shape internal decisions about the depth, breadth, and rigor of measurement across grants and within strategies. The guide also highlights the good practices the foundation aspires to follow in order to be a good steward and avoid increasing the reporting burden faced by its grantees or distracting from their work.
This is the first in an Urban Institute series of guides for nonprofit organizations that wish to introduce or improve their efforts to focus on the results of their services. This first guide, entitled Key Steps, provides an overview of the outcome management process, identifying specific steps and offering suggestions for examining and using outcome information.
The GuideStar Common Results Catalog helps organizations measure progress and results. Because the metrics your organization shares are your choice, they should reflect what you already collect and use. The Common Results Catalog was created to help you think them through: it contains all of the metrics currently in the GuideStar database, organized by subject area and developed in consultation with teams of experts. Browse the catalog to see which metrics make sense for your organization. If you don't find a metric that fits, you can add a custom one.
Leaders of organizations in the social sector are under growing pressure to demonstrate their impact on pressing societal problems such as global poverty. This Harvard Business School Social Enterprise Initiative working paper reviews the debates around performance and impact, drawing on three literatures: strategic philanthropy, nonprofit management, and international development. The authors then develop a contingency framework for measuring results, suggesting that some organizations should measure long-term impacts while others should focus on shorter-term outputs and outcomes. In closing, they discuss the implications of their analysis for future research on performance management.
This Usable Knowledge white paper breaks down the process of hiring an evaluation consultant: what to look for in a consultant and how to select and hire one for your next evaluation project.
The Stanford Social Innovation Review breaks down how local governments and nonprofits can work together for large-scale community change using five steps of shared leadership.
Ten Reasons Not to Measure Impact—and What to Do Instead, a Stanford Social Innovation Review article, simplifies the task of improving data collection and analysis with a three-question test. The author emphasizes that if your organization cannot answer yes to at least one of the following questions, then it probably should not be collecting data. 1) Can and will the (cost-effectively collected) data help manage the day-to-day operations or design decisions for your program? 2) Are the data useful for accountability, to verify that the organization is doing what it said it would do? 3) Will your organization commit to using the data and make the investments in organizational structures necessary to do so?