[Note to readers: I’m very pleased to introduce to you the first Arts Policy Library entry (not to mention first Createquity post of any kind) not written by me. Guy Yedwab is a budding theater professional who first became known to me through the magic of Twitter and later through his blog, CultureFuture. Currently a senior at New York University, he has already founded a theater company and a publishing house in the midst of going to school full-time, completing various internships and odd jobs, and now, writing for Createquity. I’m excited to have Guy on board and hope you’ll give him a warm welcome. -IDM]
In “Breakthroughs in Shared Measurement and Social Impact” by FSG Social Impact Advisors, authors Mark Kramer (FSG’s co-founder), Marcie Parkhurst, and Lalitha Vaidyanathan take a look at the different ways foundations and their grantees have tackled the lack of performance measurement standards in the nonprofit sector. The short 26-page report was released in July 2009 and funded by the William and Flora Hewlett Foundation, one of the largest foundations in America. FSG Social Impact Advisors is a nonprofit consulting firm that helps foundations and philanthropic organizations make effective grants. It should be noted that the Hewlett Foundation participates in one of the shared measurement programs documented in the report, run by the Center for Effective Philanthropy.
SUMMARY
Kramer et al. are clear about their aims early on. The paper opens as follows:
“A surprising new breakthrough is emerging in the social sector: A handful of innovative organizations have developed web-based systems for reporting the performance, measuring the outcomes, and coordinating the efforts of hundreds or even thousands of social enterprises within a field. These nascent efforts carry implications well beyond performance measurement, foreshadowing the possibility of profound changes in the vision and effectiveness of the entire nonprofit sector.”
It is these breakthroughs that the report seeks to document, primarily through interviews with participants and summaries of the systems involved. The authors isolate three important categories of breakthroughs, each building on the last:
- Shared Measurement Platforms, agreed-upon sets of benchmarks developed by funding organizations and their grantees;
- Comparative Performance Systems, which build upon a Shared Measurement Platform and look for ways to compare the results between different grantee organizations; and
- Adaptive Learning Systems, which seek to leverage both of the above systems to develop strategies and coordinate resources between multiple foundations and grantees.
The report looked at twenty different efforts of varying sizes and types.
Kramer et al. first examine the need for these initiatives. Each of these systems emerged in response to the same fundamental problem: the extreme inefficiency of the grant application process. Although foundations in the same field were attempting to evaluate the same organizations, each had its own process and its own benchmarks. Grantee organizations found themselves wasting large amounts of time filling out different applications, spending significant contributed income simply on the process of acquiring more money for their operations. Meanwhile, foundations and grantee organizations were learning neither from each other nor from their peers. The problem was described vividly in Project Streamline’s 2008 report “Drowning in Paperwork, Distracted from Purpose.”
In addition to the problem of overhead and waste inherent in duplicative grant applications, the foundations involved in these experiments recognized that organizations did not have professional benchmarks to judge their own success. In the private sector, companies have specific quantitative standards to assess their impact: market share, revenues, etc. But in the field of nonprofits, Kramer et al. note that foundations currently face a choice between two equally problematic ways of evaluating impact: haphazard self-reporting, or expensive third-party auditing. Foundations and their grantees have a mutual interest in systematically overcoming these difficulties.
To address these needs, a number of different and independent Shared Measurement Systems have been set up, usually by a group of large foundations, as reusable yardsticks for their applicants. These systems typically take the form of web-based applications that allow representatives of a grantee organization to plug in data from their own organization and analyze the results. In the case of Comparative Performance Systems, the participants can compare this data against other users in their field. Adaptive Learning Systems build on this model by using real-world, face-to-face meetings to discuss the meaning of these results with others in the field.
The report notes certain obstacles to implementing these programs, which all seem attributable to fears on the part of participants. Grantees fear the complexity of some of these systems, worry about disclosing too much internal information, and are afraid of running afoul of funding biases if they participate. Foundations are hesitant to spend money on developing such systems because the spending does not go directly towards their stated goals; although such systems might indirectly help the homeless or the environment, the impact is less immediate. There is also the “free-rider” problem: foundations might pour substantial time and effort into developing a system, only to have later foundations and grantees benefit without having paid in originally. The authors note, however, that the free-rider problem should be less of a concern in the nonprofit field, since the whole point of philanthropy is to benefit others.
The report identifies eight success factors for implementing these systems:
- Strong leadership and substantial funding
- Broad engagement by many organizations
- Voluntary participation
- Web-based technology
- Independence from funders
- Ongoing staffing to support member organizations
- Testing and continual improvement
- Periodic information exchange among users
The appendix contains four case studies and detailed information about the 20 organizations examined. The organizations tackle issues as diverse as housing and economic development, cultural development, education, and environmental preservation—there is even an Adaptive Learning System for marine fisheries.
The four case studies have a number of notable elements in common. Most of these programs involve web-based data collection and sharing, along with a support organization to help their members. Many are free to use, but others have subscription costs that can range into the thousands of dollars, although the authors are quick to note that member organizations still save money compared with going without the system. Most of the programs were started by a few large, locally influential funders, such as the David and Lucile Packard Foundation or the Acumen Fund. The initial investment averages roughly $1.2 million (excluding the Cultural Data Project, a clear outlier at $2.3 million), and development time ranges from two to five years.
ANALYSIS
I found the report’s conclusions intriguing and the arguments put forward in favor of these shared measurement systems persuasive. The common-sense approach of solving problems by sharing information and reducing overhead seems so intuitive that it’s hard to believe these programs aren’t more widespread. My only criticism of the report is that it does not examine the implementation and effectiveness of the various programs in much depth.
The report is entirely qualitative in nature. For the most part, the authors limit themselves to descriptions of the programs they examine; there is no quantitative look at the effectiveness of any of the programs, nor any analysis of the metrics used within them. The qualitative analysis is drawn entirely from interviews with participants in the organizations, and the interviews are uniformly positive. I wouldn’t go so far as to ascribe this to deliberate bias on the part of the authors, but the lack of any sustained criticism from the participants, or of perspectives from organizations that chose not to participate, may create an overly rosy picture.
The report holds that the trends described point toward a new direction for the relationship between grantees and the foundations that support them. The opportunities for reducing overhead and bringing order to the chaos of searching for foundation support are clear. In other contexts, this approach has been effective: the Common App, for instance, has made the process of applying for colleges somewhat more bearable for the thousands of students who use it.
Furthermore, there is a particular passage in the report where Kramer et al. seem to reach past the stated purpose of the programs to see an even larger picture. They note that the now-famous success of the Harlem Children’s Zone stems from the powerful coordination between all aspects of the education process. They see the potential for Adaptive Learning Systems to create the groundwork for networks of multiple organizations, as closely coordinated as the Harlem Children’s Zone, working in tandem to accomplish the same goals in other contexts.
Kramer et al. already see this happening in one example, the Strive Initiative. Strive is a large-scale partnership in the Greater Cincinnati area that brings together three public school districts, one diocesan district, eight universities and community colleges, and hundreds of other education nonprofits. Strive sets out a series of Community-level Progress Indicators, measuring what percentage of children are performing adequately each year. Furthermore, educational nonprofits using similar approaches to improve these indicators collaborate in Student Success Networks (SSNs), such as the Tutoring SSN, which brings together school districts, tutoring organizations, and the Cincinnati Metropolitan Housing Authority. What began as a shared measurement system (the Community-level Progress Indicators) developed into an adaptive learning system (the SSNs).
In a way, the report’s argument for the impact of these measurements reminds me of Richard Florida’s statement that the culture of the creative class could transform the effectiveness of manufacturing and service industries; here, it seems that the Organization Man has something positive to give to the creative class.
Despite the clear logic and potential of all these elements, the report offers little examination of the impact of these programs, and particularly little examination of the actual systems of measurement themselves. Examples of questionnaires and metrics are provided, but the lack of critical analysis of their effectiveness and use makes it difficult to evaluate their impact. The only empirical evidence offered is a circumstantial increase in arts funding in Philadelphia, possibly a result of the organization and empirical benchmarks put forward by the Cultural Data Project, but this is only one data point and the causation isn’t very clear.
In fairness, there are a number of reasons why the report does not attempt that level of depth. Most of the projects examined have been in operation for less than a decade, and many of them have not yet been fully implemented. The “breakthroughs” hailed by Kramer et al. are still in their infancy, and it may be difficult to judge their full impact at this time. Still, an attempt to analyze the metrics used by the different organizations would have been very helpful in understanding how they operate.
Perhaps the goals of this report should be revisited in the next decade, with a more detailed and quantitative analysis of the effects of these programs. A future report could create more specific guidelines for how to create new successful systems, and share the lessons learned by the early trailblazers. This could have the added effect of lowering the development time and cost of new systems, as well as building an even stronger case for adoption and expansion.
IMPLICATIONS
The implications of this report point the way toward an excellent new strategy for approaching philanthropy and, by extension, arts and culture. Although most of the organizations examined were not arts organizations, it is clear that the lessons translate across boundaries, from anti-poverty groups to the performing arts. The Cultural Data Project, specifically, looks to be the vehicle for this change.
There is, however, an arts-specific implication overlooked by the authors of the report, and this is the aspect that most interested me. The report focuses largely on the current structure of these shared measurement organizations, which are typically led by large private foundations. It does not investigate the possibility of public sector involvement. For instance, the report discusses the reluctance of foundations to embark on this sort of indirect investment. This weakness, however, becomes a strength when applied to the public sector. One of the political barriers to public arts funding has been the reluctance (to put it mildly) of public officials to risk funding artists and arts organizations directly, especially at the national level. It is exceptionally easy to attack the arts by attacking a few examples of controversial organizations, and there will always be a toxic debate surrounding which organizations to support and by how much.
The power of this report is that it puts forward another strategy for supporting philanthropy (and the arts in particular), one which avoids the problem of directly supporting organizations. Instead, it proposes an informational infrastructure for the arts.
In this age, we’ve come to recognize that knowledge and data have a particular power to maximize the impact of any organization. The government already provides its greatest support for the arts through infrastructural investments that are equally available to all arts organizations—the institutions of copyright and nonprofit tax exemption are, from one perspective, infrastructural investments in the arts. The National Endowment for the Arts could appropriate a fraction of its budget to create national standards for data sharing in the arts, eventually creating adaptive learning systems for the arts.
The political advantage of this approach is that it avoids picking and choosing among organizations, which touches on sensitive issues of censorship, ideology, and artistic merit. Foundations are better placed to support individual organizations and artists; this way, the government’s investment facilitates foundations’ ability to support those artists and organizations.
To a large extent, this is the federal government’s approach to stimulating industry. It would be very difficult to measure the impact of the Bureau of Labor Statistics on the national economy, but no one would deny that for a relatively modest cost it provides huge benefits to the world of labor. So it could be for the arts.
Consider, for a moment, that with an investment of $2.3 million, such an infrastructure is being established in seven states in the form of the Cultural Data Project. Suppose the cost per state stays constant; it would then take roughly $16.4 million to cover all 50 states (a very rough approximation, just for the purposes of this point). That is still only a portion of the $50 million in stimulus money appropriated for the arts, and I suspect the actual cost would likely be less. In other words, for a very plausible amount of money, the National Endowment for the Arts could invest in a project whose benefits would be accessible to every arts organization.
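For readers who want to check that figure, the back-of-envelope arithmetic is simply the Cultural Data Project’s $2.3 million investment divided across its seven states and then scaled to 50, under the same admittedly rough assumption that the per-state cost stays constant:

\[
\frac{\$2.3\ \text{million}}{7\ \text{states}} \approx \$0.33\ \text{million per state}, \qquad 50\ \text{states} \times \$0.33\ \text{million} \approx \$16.4\ \text{million}
\]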
This report is an important first step in building such an approach. Although more work remains to be done, the report provides a clear method of cutting overhead, sharing information, and eventually building a strategy for coordinating the arts.
[UPDATE: Don’t miss the lengthy comment below from Lalitha Vaidyanathan, one of the report’s co-authors, and Guy Yedwab’s response.]