Common Results Catalog
The GuideStar Common Results Catalog helps organizations measure progress and results. Because the metrics your organization shares are your choice, they should reflect what you already collect and use. The Common Results Catalog was created to help you think them through: it contains all of the metrics currently in our database, organized by subject area and developed in consultation with teams of experts. Browse the catalog to see which metrics make sense for your organization; if you don’t find a metric that fits, you can add a custom metric.
Hiring an Evaluation Consultant
This Usable Knowledge white paper offers practical tips for hiring an evaluation consultant: what to look for, and how to select and hire one for your next evaluation project.
Effective Partnerships
The Stanford Social Innovation Review breaks down how local governments and nonprofits can work together to achieve large-scale community change using five steps of shared leadership.
The Generalizability Puzzle
The Stanford Social Innovation Review's Generalizability Puzzle recognizes that any practical policy question must be broken into parts: some will be answered with local institutional knowledge and descriptive data, and some with evidence from impact evaluations in other contexts. The generalizability framework set out in the paper provides a practical approach to combining evidence of different kinds to assess whether a given policy is likely to work in a new context. If researchers and policy makers continue to view the results of impact evaluations as a black box and fail to focus on mechanisms, the movement toward evidence-based policy making will fall far short of its potential for improving people’s lives.
Ten Reasons Not to Measure Impact—and What to Do Instead
Ten Reasons Not to Measure Impact—and What to Do Instead, a Stanford Social Innovation Review article, simplifies the task of improving data collection and analysis with a three-question test. The author argues that if your organization cannot answer yes to at least one of the following questions, it probably should not be collecting the data:
1) Can and will the (cost-effectively collected) data help manage day-to-day operations or design decisions for your program?
2) Are the data useful for accountability, to verify that the organization is doing what it said it would do?
3) Will your organization commit to using the data and invest in the organizational structures necessary to do so?