Recently, I had the honor of posting my first contribution to Createquity’s Arts Policy Library: my response to the report “Breakthroughs in Shared Measurement and Social Impact.” In the comments section, one of the report’s authors, Lalitha Vaidyanathan, took the time to respond to two of the main points of my piece.
The first point Ms. Vaidyanathan responds to is my desire to see more data on the effectiveness of the shared measurement programs examined by the report. First, Ms. Vaidyanathan writes:
Your observation about the relative youth of the systems investigated for the report is spot on. Success Measures and Cultural Data Project are the oldest of the systems we examined in depth and both started operations around 2005. As such their experience is limited to 3-4 years. At this stage, the effectiveness measure of these systems that is most quantifiable is the savings seen in terms of time and cost. The data supporting this is sprinkled throughout the report but in the interest of clarity, it is summarized and elaborated upon here below. In terms of increased impact as a result of using these systems, Strive (the example of an Adaptive Learning System detailed in the report), even though just two years in operation (it was launched in late 2008), has started to see positive improvements on many of the 10 community-level indicators it tracks. While this was mentioned in the report, it was not elaborated upon – I take the opportunity to do so here.
First, I hope I made it clear in my original piece that while I very much hungered for data on the effectiveness of the programs, I did understand the youth of the projects. In a way, my comments were less a criticism of the original report and more a desire to see further investigation along similar lines in the future.
Second, Ms. Vaidyanathan is right that the best way to work within such a short lifespan is to look at the most straightforward, short-term impact: the time and money saved by reducing wasteful grant-writing.
(a) Increased effectiveness in terms of time saved
Time saving resulting from the use of the Cultural Data Project system offers a good example. The system streamlines both grant application and reporting for participating arts organizations. Assuming an average grant size of $50,000 (and this is an over-estimate since Center for Effective Philanthropy (CEP) data shows this average to be true only for the largest foundations in the US), an organization with an annual budget of $500,000 would have 10 funders. Assuming grant application and reporting for each funder takes about 40 hours (this is the median data reported in CEP’s Grantee Perception Report for health foundations), that is a total of 400 hours a year. The Cultural Data system on the other hand, requires annual update of a single Data Profile – while there are 300 questions, many of these (like contact, background, description, etc) need only be entered once. The effort here would be at the most 2 weeks of work or 80 hours a year – this represents an 80% time saving for the non profit.
The numbers are ballpark figures, but they seem useful enough to illustrate the time saving. However, they are still a projection, based on the following stated assumptions (a quick arithmetic sketch follows the list):
- Average grant size of $50,000 (estimate based on CEP data)
- Organizational budget of $500,000 (hypothetical)
- Grant application and reporting time of 40 hours per funder (source: CEP Grantee Perception Report)
- Cultural Data system requiring 80 hours of work a year (Ms. Vaidyanathan’s projection)
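For reference, here is the arithmetic behind the 80% figure spelled out under those stated assumptions (the numbers are hers; packaging them into a quick script is mine):

```python
# Back-of-envelope check of the 80% time-saving claim,
# using only the assumptions listed above.

avg_grant_size = 50_000       # average grant size, per CEP data
org_budget = 500_000          # hypothetical organizational budget
hours_per_funder = 40         # application + reporting per funder (CEP Grantee Perception Report)
cdp_hours_per_year = 80       # projected annual effort to maintain a Cultural Data Profile

num_funders = org_budget // avg_grant_size          # 10 funders
status_quo_hours = num_funders * hours_per_funder   # 10 * 40 = 400 hours/year
time_saving = 1 - cdp_hours_per_year / status_quo_hours

print(f"Status quo: {status_quo_hours} hours/year")
print(f"With the Cultural Data Project: {cdp_hours_per_year} hours/year")
print(f"Projected time saving: {time_saving:.0%}")  # 80%
```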
There is also an unstated assumption underpinning her conclusion: that a grantee organization using the Cultural Data system does not need to apply anywhere else for funding.
I compared these shared measurement systems to the Common App in my analysis of the report, and my own experience with the Common App makes me think that such an assumption is not well founded. I applied to 16 schools when I was applying to college, 11 of which were on the Common App. NYU, my top choice (where I attend now), was on the Common App but required an additional supplement. There were a few other schools in a similar category. I also applied to a number of schools in the UC system, which had its own equivalent of the Common App (one UC application for all of its schools). In the end, I wound up filling out more applications than just the single Common App.
This is not to say that the Common App was useless. It did in fact allow me to apply to more schools in less time. I’m not arguing that shared measurement systems are not time-savers; I simply want to point out that the 80% time-saving projection strikes me as rosy, especially in the early days.
After all, in the context of the relative youth of these systems, the question is how many funders within a given field have signed on to the shared measurement system. To return to the Common App comparison: the Common App allows applications to 150 colleges in the United States, while according to the US Census there were 4,084 higher-learning institutions in 1999. In the case of the Common App, I only needed to be accepted by one. But if I had needed to be accepted by 10, I would have had to apply to more schools, and more of those might have been non-Common App schools.
In the funding world, where sources are more limited and more are needed, it is important to ask how many of your funders are going to be participating in the shared measurement system. Can the organizations which participate in such systems put together their entire budget with funding acquired from participating funders? Or is the figure 80% of budget? 60% of budget? It is an important question for an organization to contemplate as it decides on whether or not to participate.
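To make that concrete, here is a hypothetical extension of the same sketch (my own extrapolation, not a figure from the report or from Ms. Vaidyanathan), showing how the headline 80% saving shrinks when only some of the hypothetical organization’s ten funders accept the shared Data Profile:

```python
# Hypothetical sensitivity check: time saving when only some funders
# participate in the shared measurement system. Numbers carry over from
# the sketch above; the scenario itself is my own extrapolation.

NUM_FUNDERS = 10
HOURS_PER_FUNDER = 40       # full application + reporting for a non-participating funder
SHARED_PROFILE_HOURS = 80   # annual effort to maintain the shared Data Profile

status_quo_hours = NUM_FUNDERS * HOURS_PER_FUNDER  # 400 hours/year

for participating in (10, 8, 6, 4):
    outside = NUM_FUNDERS - participating
    hours = SHARED_PROFILE_HOURS + outside * HOURS_PER_FUNDER
    saving = 1 - hours / status_quo_hours
    print(f"{participating}/10 funders on the system: {hours} hours/year ({saving:.0%} saving)")
```

Under these admittedly crude assumptions, every two funders that stay outside the system cost roughly twenty percentage points of the projected saving.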
Also, remember that budget sizes are fluid and that human beings are apt to think more money is better. Suppose an organization saves 80% of the time it spends on the grants it planned to apply for. Will it spend that time on the organization itself? Or will it simply apply to more grants, hoping for more wins and more money?
I’m also curious about the time-saving from the perspective of the funder. Does the ease of applying for a grant lead to more applications? If so, how would the increase in applications compare to the time saved due to an easier review process?
My hunch is that, when these questions are answered, shared measurement systems will still prove more effective and time-saving. But I also suspect the saving may not be quite as large as we would think. Organizations that assume joining a shared measurement system entitles them to fire their grant-writing staff might be unpleasantly surprised.
(b) Increased effectiveness in terms of cost saving
The Success Measures Data System (SMDS) serves as a good example here. In the absence of outcomes data from a system like SMDS, funders would have to use external evaluators to understand the outcome of a grant. The cost of formal external evaluation can run anywhere from a few tens of thousands of dollars to millions of dollars – let us assume an average cost of external evaluation of around $50,000. The SMDS annual subscription fee is $2,500 – assuming an external evaluation is conducted every 5 years – that represents a 75% cost saving.
I have an easier time believing in the cost savings, although some of the same problems from the time savings also apply here. The example Ms. Vaidyanathan chose is one of the more clear-cut aspects of cost saving: being part of a system that generates outcomes data does reduce the need to bring in external evaluators to generate that data. Other costs, such as the salaries of full-time grant-writing staff, might be harder to reduce, but this one seems a fair point.
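For what it’s worth, the arithmetic behind the 75% figure does check out under her stated assumptions; here is the same kind of minimal sketch, amortizing the external evaluation over five years:

```python
# Back-of-envelope check of the 75% cost-saving claim.

external_evaluation_cost = 50_000  # assumed average cost of a formal external evaluation
evaluation_every_n_years = 5       # assumed evaluation cycle
smds_annual_fee = 2_500            # Success Measures Data System subscription

annualized_evaluation = external_evaluation_cost / evaluation_every_n_years  # $10,000/year
cost_saving = 1 - smds_annual_fee / annualized_evaluation

print(f"External evaluation, annualized: ${annualized_evaluation:,.0f}/year")
print(f"SMDS subscription: ${smds_annual_fee:,}/year")
print(f"Cost saving: {cost_saving:.0%}")  # 75%
```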
It is important to note that both the above calculations do not capture the other important benefits realized from such systems – improved data quality and reduced need for evaluation expertise (definition/measurement of outcome indicators requires some expertise – a specialized skill set that most non profits do not have in-house). We would expect both the above benefits to result in increased programmatic impact – as noted below, due to the early stage of these systems, quantitative impact data is not yet available.
All of those are fair points. Certainly, if it turns out that the time saving is really less than 80% and the cost saving is really less than 75%, it is worth pointing out that the other important benefits are part of the cost-benefit analysis as well. Once the quantitative impact data becomes available, we’ll know how those other benefits compare to the time and cost savings.
(c) Increased effectiveness in terms of impact
The higher level benefits of the above mentioned two systems – increased knowledge for non profits (ability to learn from higher performing peer organizations) and funders (ability to make better programmatic grant decisions) – and the resulting increase in impact of the work is not yet documented in a quantitative manner due to the early stage of development of these systems. As you suggest, we do think a follow up report in a few years that documents this will be beneficial for the field. Strive, however, does provide some quantitative evidence of this point. Even though only in operation for two years, it has already seen positive improvement in all 5 of its major Goal areas (see http://www.strivetogether.org/documents/ReportCard/2009StriveReportCard.pdf for a copy of its 2009 report card). Perhaps as importantly, the report card also allows it to identify areas where the indicators are trending downwards (e.g. Goal 5 indicators for Cincinnati State Technical and Community College that are trending downwards include College Readiness, Retention in Associates Degree, College Graduation, Number of Associates Degrees Granted) and thus where additional effort would be needed in the upcoming years by those action networks. This ability to identify issues, adapt strategies based on measurement and then act on it is only possible in Adaptive Learning Systems. It is our belief, as we state in the Conclusion section of our report, that Adaptive Learning Systems hold the greatest potential of moving the field toward its ultimate goal of impacting and solving social problems.
The Strive 2009 Report Card is an excellent blueprint for an informational infrastructure. The quality and depth of information in the report are impressive, and it is presented in a manner that, although dense, is easy to follow. The potential it creates to unify efforts and isolate problem areas is real.
What the Strive Report Card is not is a meta-analysis. The Report Card is an analysis of outcomes in the community, but it is not an analysis of Strive. We don’t know whether money was used effectively within Strive or went to waste, and we don’t really know which Strive-related projects impacted which parts of the report card. In management terms, that’s process maturity: having a process about your process. Strive can clearly isolate issues in the broader community, but it doesn’t yet seem able to isolate issues within itself – unless it does so in internal documents I haven’t located, which I’d love to be made aware of.
Again, it’s called process maturity for a reason. Strive is one of the more mature systems, but it isn’t fully mature yet. It is only three years old.
Lastly, Ms. Vaidyanathan addresses my main personal insight about the program:
2. Role of public sector
The Arts specific implication you suggest of having the public sector invest in a shared measurement infrastructure for the field is an excellent one. The original development of the Cultural Data Project in Pennsylvania did include the Pennsylvania Council on the Arts which is a public agency in the Office of the Governor of PA. There is however, much greater scope for public sector involvement in building of such infrastructure. Perhaps with the setting up of agencies such as the White House Office of Social Innovation, infrastructure efforts such as that suggested here might be more likely to happen.
The potential of the White House Office of Social Innovation and Civic Engagement (WHO-SICE) is one I briefly entertained on my own blog in a post here, where I proposed that WHO-SICE become the patron saint of young, new arts organizations while the NEA becomes a caretaker of large, old, established organizations in the traditional grant-making structure. I’m not sure whether creating shared measurement systems would fall more under WHO-SICE or the NEA in that dichotomy. However, if you recall the furor whipped up around the NEA when it tried to get individual artists to participate in a National Day of Service, you’ll see that any scheme in which the NEA can help the arts without being accused of influencing the artists themselves is probably a better direction for it.
So, thanks to Ms. Vaidyanathan for directly responding to my analysis of “Breakthroughs in Shared Measurement.” I think we’re both basically in agreement that there’s a definite opportunity for a follow-up report three to five years from now, one able to dive into the impact of these shared measurement systems with more depth and quantitative rigor. The youth of the programs in question prevents answering many of my questions at this time, but I appreciate the opportunity to air them and get responses.