<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Createquity.</title>
	<atom:link href="https://createquity.com/author/guyyedwab/feed/" rel="self" type="application/rss+xml" />
	<link>https://createquity.com</link>
	<description>The most important issues in the arts...and what we can do about them.</description>
	<lastBuildDate>Wed, 15 Jul 2020 20:17:39 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Arts Policy Library: The Search for Shining Eyes</title>
		<link>https://createquity.com/2010/02/arts-policy-library-the-search-for-shining-eyes/</link>
		<comments>https://createquity.com/2010/02/arts-policy-library-the-search-for-shining-eyes/#comments</comments>
		<pubDate>Wed, 10 Feb 2010 21:03:39 +0000</pubDate>
		<dc:creator><![CDATA[Guy Yedwab]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[arts policy library]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=1177</guid>
		<description><![CDATA[In the wake of the 1990 recession, the John S. and James L. Knight Foundation embarked on a historic program in an attempt to revolutionize classical music in the United States. The Magic of Music Symphony Orchestra Initiative lasted from 1994 to 2004 and aimed to transform the audience’s experience of music in the concert<a href="https://createquity.com/2010/02/arts-policy-library-the-search-for-shining-eyes/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p><img decoding="async" class="aligncenter" src="http://www.knightfoundation.org/media/uploads/publication_images/2006_magic_of_music_final_report_cover.jpg.221x0_q85_crop.jpg" alt="" />In the wake of the 1990 recession, the John S. and James L. Knight Foundation embarked on a historic program in an attempt to revolutionize classical music in the United States. The Magic of Music Symphony Orchestra Initiative lasted from 1994 to 2004 and aimed to transform the audience’s experience of music in the concert hall. At the end of the project, the Knight Foundation commissioned Dr. Thomas Wolf to document the impact of the program and the lessons learned. The 59-page report, “<a href="http://www.wolfbrown.com/images/books/shiningeyes.pdf">The Search for Shining Eyes</a>,” was released in September 2006 and tells the story of The Magic of Music.</p>
<p><strong>SUMMARY</strong></p>
<p>Wolf begins by tracing the background of the Magic of Music Initiative to the 1990 recession. The Knight Foundation’s president at the time, Creed Black, felt that the Foundation was constantly inundated with pleas for help from struggling orchestras in the 26 communities it serves. Typically, the Knight Foundation’s efforts to save struggling orchestras were short-term stopgap measures, failing to address the real problems facing those orchestras.</p>
<p>In this climate of crisis, Black came across a speech made by Oberlin College President Frederick Starr in front of the American Symphony Orchestra League in 1988.  In the speech, Starr stated that the real reason for the decline in classical music’s fortunes was that <strong>orchestras had forgotten how to make compelling artistic experiences that connect with audiences</strong>. According to Starr, “Americans all too rarely get an opportunity to take pleasure in classical music. Instead they are being separated from it by a wall of grim convention and self-conscious ritual having nothing to do with the music itself.” Starr proposed that the secret to turning around orchestras’ fortunes was to focus on revolutionizing the audience’s experience of the music.</p>
<p>Black eventually spoke with Starr, and decided to attempt to put Starr’s ideas into practice. Along with the Knight Foundation’s new arts program director, Penelope McPhee, Black and Starr set about creating a new model for foundation involvement in orchestral programming. Rather than simply providing specific orchestras with short-term general operating support, they sought to <strong>provide a group of orchestras with grants to develop dynamic programs to change the audience experience.</strong></p>
<p>Conductor Benjamin Zander, who served on the Magic of Music’s advisory committee during the design phase of the program, articulated its core philosophy when he said: “When I look out and see all those ‘shining eyes’ in the audience, I know we’ve created the magic.” This quote is the inspiration for the name of the program, the Magic of Music.</p>
<p>For the first phase of the initiative, the Knight Foundation sent invitations to orchestras in its priority communities to invite them to apply for planning grants for new approaches to their programming. Unfortunately, this first round of invitations met with disappointing results. Knight Foundation program staff had put as few guidelines in place as possible, thinking that this would stimulate creativity and out-of-the-box thinking. Because the invitations were sent during the summer, however, when key figures at the orchestras were away, orchestra development departments simply applied their usual grant application techniques. The result was a series of uninspiring proposals.</p>
<p>Rather than funding any of the proposals, the Knight Foundation decided to try soliciting applications again, expanding their reach outside of the original 26 communities. The second set of invitations was more prescriptive, requiring explanations of how all parts of the organizations would be involved, and that representatives from each department would attend regular summits.</p>
<p>The second attempt was more fruitful: of 25 projects proposed, 12 were funded by the Magic of Music Initiative. According to Wolf, the proposals tended to fall into five categories:</p>
<ul>
<li>Technological enhancements (such as video monitors)</li>
<li>Technological approaches to finding new audiences</li>
<li>Audience development programs</li>
<li>Changes in musical content and format</li>
<li>Educational materials and programs</li>
</ul>
<p>All of the proposals were designed in accordance with the “shining eyes” theory that transforming the audience experience was key to lifting orchestras’ fortunes.</p>
<p>Over the next three years, those programs were put into action. Some orchestras found success—the Oregon Symphony Orchestra, for instance, created a new concert series called “Nerve Endings” that had a sell-out run and was attended by 2,700 people, 38% of whom were under the age of 40. Other orchestras saw much worse results: the Philadelphia Orchestra received such negative audience reactions to its video screens that the program was scrapped early, and the money was redirected instead towards improving internal communication issues in the wake of a bitter labor dispute between the musicians and the board. <strong>Most orchestras were in the middle, unveiling programs that did not seem to affect their audiences one way or another.</strong></p>
<p>At the end of the three years, the Knight Foundation attempted to evaluate the program’s progress to that point before embarking on the next phase. Unfortunately, the evaluation process also ran into unexpected complications. The Bay Group, a San Francisco consulting firm that was brought in only after the projects were done, found it difficult to gather any useful information. Goals had been poorly articulated, little pertinent data had been collected or preserved over the course of the first phase, and the data wasn’t consistent across the projects.</p>
<p>However, important lessons were already clear. The “shining eyes” theory was not the full story. As Wolf puts it:</p>
<blockquote><p>Over time, the idea that changing the concert hall experience in and of itself was the key to reinvigorating any individual orchestra or the orchestra industry turned out to be naive. Too many other factors within the field, and more broadly in the larger society, were in play. But it remained useful as a starting point and it remains one of the hallmarks of Knight’s accomplishment.</p></blockquote>
<p>The Knight Foundation decided to rethink its objectives for the second phase of the initiative. Although it continued to focus on transforming the experience inside the concert hall, the Magic of Music program would now also focus on <strong>generating a demonstrable increase in ticket-buying</strong>, and a clearer understanding of market dynamics.</p>
<p>Before Phase II began, the Knight Foundation commissioned what Dr. Wolf calls “the largest discipline-specific study of arts consumers” from Audience Insight LLC (whose president, Alan Brown, would eventually join with Wolf to form WolfBrown). The report, “How Americans Relate to Classical Music and their Local Orchestras,” was full of startling information that would become the basis for Phase II.</p>
<p>The report found that:</p>
<ul>
<li>60% of adults listen to classical music, and 33% fit it into their lives on a regular basis. 27% were considered to be “orchestra prospects,” meaning that they had enough interest in classical music to be included in a potential audience for the orchestra. This population included prospects who were currently involved with orchestras, those who had previously been involved but no longer were, and those who might be converted into audience members but had never been reached.</li>
<li>Of those interested in classical music, 50% listened to it on the radio at least several times a month, and almost 75% owned one or more classical CDs; the average respondent owned 16.</li>
<li>Concert halls ranked as one of the least popular places to listen to classical music; the top ranked place was in the car, and the second was at home.</li>
<li>Only 6% of listeners considered themselves “very knowledgeable;” 50% considered themselves “not very knowledgeable.”</li>
<li>12% had never heard of their local symphony, and less than 5% of those interested in classical music had actually patronized their local symphony. 40% of those that had attended a concert had never purchased a ticket. Only 8% of those who had patronized were subscribers.</li>
<li>74% of those interested in classical music had played an instrument or sung in a chorus.</li>
</ul>
<p>The report also found that if just 10-20% of those who were “very interested” in classical music attended their local orchestra, <strong>the average orchestra’s attendance would double.</strong></p>
<p>The impact of this report was decisive. Previously, conventional wisdom held that orchestras were in decline because fewer people cared about classical music. The report suggested that <strong>the problem was not classical music itself, but rather its delivery system</strong>. As Wolf puts it, “No longer was the challenge how to get more people to buy tickets for the existing product provided in the same old way, and in the same old place.”</p>
<p>Buoyed by these findings, the Knight Foundation accepted thirteen orchestras into the next phase of the initiative. This time, the orchestras were organized into three consortia whose members would work together on similar projects, exchanging information and providing support. The consortia’s members would design their programs together, and would also collaborate on standard methods of evaluation.</p>
<p>The three consortia projects were to</p>
<ul>
<li>Involve orchestra members in outreach with audiences,</li>
<li>Generate educational programs and technological aids, and</li>
<li>Engage in audience development.</li>
</ul>
<p>For the next five years the three consortia projects were put into practice. Although in Phase II there were no failures to the degree of some of the Phase I orchestras, improvements were only modest. Some technological advances were highly successful: Wolf describes the ORBIT software developed to encourage ticket buyers to organize outings with friends as “one of the few aspects of the initiative that had long-lasting impact,” and the Concert Companion (a handheld device to accompany concerts with explanatory text, program notes, and video images) as the “most potentially transformational.” Other innovations succeeded in attracting new audiences, but few demonstrated the ability to translate new audiences into substantially increased ticket sales.</p>
<p>Once the five years of Phase II were complete, the Magic of Music Initiative had finally ended its decade-long experiment. Wolf sums up its legacy by saying that the projects were “intended to be transformational in nature—[but] most were only marginally so, and some of the most significant outcomes were only indirectly associated with the projects that were initially funded.”</p>
<p>For Wolf, the most lasting impact of the program was the knowledge generated by the Foundation. There were lessons to be learned about the organization of orchestras, about the classical music audience, and about the nature of funding music in an effective way.</p>
<p>The lessons for funders included:</p>
<ul>
<li>The amount of investment involved needs to match the scale of the desired change,</li>
<li>Funders need to be clear with themselves and their grantees about desired outcomes – including the metrics by which those outcomes will be measured,</li>
<li>Important innovations and changes cannot happen in an organization that is in the midst of a financial crisis, and</li>
<li>Unintended results can be as significant as the desired outcomes, and funders need to be ready to support those unintended results.</li>
</ul>
<p>The lessons for orchestras included:</p>
<ul>
<li>The problem with classical orchestras is not the music they play but the delivery systems they employ,</li>
<li>Orchestras that are not relevant to their communities will be increasingly endangered,</li>
<li>All of the members of the orchestra family must be involved in changes – the music director, musicians, administration, volunteer leadership, and trustees,</li>
<li>Free programming does not tend to create new ticket-buying audiences,</li>
<li>Audience education tends to serve the existing audience, not new audience members, and</li>
<li>Participatory arts education programs are much more closely linked to future audience members than expository arts programs.</li>
</ul>
<p>These are the lessons that each of the orchestras who participated in the initiative learned for themselves, and according to Wolf they are the lessons that future funding initiatives should take to heart before embarking on a large-scale project of change in the arts.</p>
<p><strong>ANALYSIS</strong></p>
<p>“The Search For Shining Eyes” is an extremely insightful resource that accomplishes the author’s goal “to produce something for a broad readership, not simply to add to the shelves of foundation archives.”</p>
<p>The insights gained from the initiative range from the obvious (financially troubled orchestras have trouble innovating) to the counter-intuitive (audience education favors existing audiences, rather than new audiences). The information generated by Audience Insight’s consumer study was particularly enlightening; the data provides a solid case for optimism and clarifies the true challenges and opportunities for classical music.</p>
<p>With that said, however, I felt that the report sometimes failed to confront the contradictory goals and accomplishments of the separate projects. For example, the report identifies the primary goal of the initiative&#8217;s second phase as a demonstrable increase in ticket-buying. The innovation cited as “the most potentially transformational” in this phase was the Concert Companion, yet there is no data cited to show that the Concert Companion impacted ticket sales in any meaningful way. In what way, then, does the device have the opportunity to transform classical music?</p>
<p>The report hints at positive benefits from the more qualitative aspects of the projects, but sometimes neglects to elucidate what those other benefits might be. For instance, in response to the Charlotte Symphony&#8217;s attempts to attract new communities, the author says, “Like many orchestras, Charlotte discovered that there are many benefits to community engagement activities but that creating ticket buyers is not one of them.”</p>
<p>The report implies that there is something equally compelling about these “side” benefits, despite their lack of translation into sales. For instance, in discussing the Brooklyn Philharmonic, the author writes,</p>
<blockquote><p>Perhaps as much as any orchestra, Brooklyn was able to shift its focus from the concert hall to the community&#8230; new audiences increased, though this did not convert into substantial new ticket buyers in the concert hall. By the end of the project, the expansion of audiences in the community came to be an organizational goal for its own sake, not as a tool for increased ticket sales.</p></blockquote>
<p>The Brooklyn Philharmonic may have achieved success, but it was the success of a different objective than that put forth by the Magic of Music Initiative, namely increasing ticket sales.</p>
<p>This tension gets at a problem that was present in the Magic of Music Initiative from early on. If, as the author states, the “shining eyes” theory is naïve and reforming the performance experience does not translate into substantially more ticket sales, which goal should orchestras and funders pursue?</p>
<p>The report gives weight to both sides of the issue. Many of the programs that the report views as successful, such as the structural reforms of the St. Paul Chamber Orchestra or the Concert Companion, are in pursuit of intangible benefits unrelated to ticket sales. On the other hand, the report clearly asserts that orchestras in financial crisis can&#8217;t innovate, and the financial crises of orchestras come directly from their declining ticket sales. <strong>Furthermore, most of these “intangible” benefits did not have any effect on the trend of declining audiences—which, inevitably it would seem, will doom even the most innovative orchestras to eventual collapse.</strong></p>
<p>It would have been helpful for the report to return to the original impetus for the Magic of Music Initiative and compare the different aspects of success to the original goal set forth by Creed Black: the long-term stability of orchestras. Will the community performances of the Brooklyn Philharmonic at any point translate into a more stable, self-sufficient financial future? Will an orchestra that shifts its focus from increased ticket sales to community engagement activities be any more stable than an orchestra that continues to pursue the status quo?</p>
<p>The report does not provide a clear answer to that question, and therefore it is difficult to truly gauge the impact of the Magic of Music Initiative.</p>
<p><strong>IMPLICATIONS</strong></p>
<p>I was rather surprised by the apparent lack of discussion of this report among the greater arts community. My search of both the general internet and a wide range of specific publications didn’t yield any results from writers who were engaging critically with the material.</p>
<p>However, I did find a broad set of responses to the segmented study of classical audiences, “How Americans Relate to Classical Music and their Local Orchestras.” The study prompted many in the classical music field to change their approaches to developing new audiences. The study’s approach would be helpful in other disciplines as well. For instance, a cursory search for segmented consumer studies of theater audiences yielded only a few results, all of which focused on the existing theater audience, not the consumers who could be buying tickets but aren’t.</p>
<p>The most significant lesson from that study was that the repeated cry that Americans simply can’t be made to care about classical music is wrong. This cry, however, is not limited to classical music—just this past June, the National Endowment for the Arts reported that arts audiences are <a href="http://latimesblogs.latimes.com/culturemonster/2009/06/nea-reports-decline-in-arts-audiences-for-2008.html">declining across the board</a>, whether it’s museum-goers, opera lovers, or live theater attendees. Knowing that the potential audience is not as small as once believed, and that the situation is not as bleak as some would imagine, we can commit ourselves anew to finding solutions to the problem.</p>
<p>Although some of the circumstances described in the report are unique to the orchestra community, such as the distant music director or the sharp labor disputes between musicians, a number of the lessons can apply to any of the varying arts disciplines.</p>
<p>For instance, when Wolf writes that orchestras are “misusing scarce funds by spending too much to please a shrinking subscriber base and not enough to attract new audiences who may hold broader definitions of classical music,” I find it hard not to think of some of the large non-profit theaters in New York City. Recently, I was talking with a friend of mine who works as a literary intern for an established theater whose mission is to support the work of emerging playwrights. But I was discouraged from submitting one of my more recent plays because they didn’t like plays with non-traditional structures, “like Beckett.” Another playwright I met recently, who had a play go up on Broadway to broad acclaim, spoke about a commission he received from a regional theater, and said with disappointment that they wanted this one to be about “real people saying real things in real situations.” While the report shows that not all departures from traditional views on the arts will be successful or gain new audiences, this resistance to mixing contemporary forms with more traditional structures limits the ability of arts organizations to bring in new audiences.</p>
<p>The report also underlines the need for information sharing amongst funders and grantees, which falls in line with the last report I profiled, <a href="https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement.html">FSG’s Breakthroughs in Shared Measurement</a>. Especially in the second phase of the Magic of Music Initiative, the Knight Foundation intuited the need to integrate measurement processes and information sharing with the projects as they occurred, rather than attempting to tack them on at the end.</p>
<p>In fact, the Magic of Music Initiative’s second phase qualifies as an attempt at an Adaptive Learning System. The orchestras were grouped together, shared common metrics developed by a third party (paid for by the Knight Foundation), and met regularly to evaluate their progress toward similar goals. Contrasting the first and second phases of the Magic of Music Initiative provides a simple lesson in the difference between funding without an Adaptive Learning System and funding with one.</p>
<p>For example, in Phase I, orchestras did not learn lessons from one another until the conclusion of the project, and largely were not involved in each other’s projects. In Phase II, the collaboration between orchestras became crucial. Ideas generated in the consortia were developed by a larger group of creative minds, and tested with different audiences. For instance, the successful ORBIT software was developed jointly by three of the orchestras, allowing them to spread the cost of development and gather data on its use among all three orchestras’ audiences. When it was time to evaluate Phase II, the use of shared measurements made it easier for the report’s authors to assess the impact of the funding on the orchestras.</p>
<p>In the end, the report documents an attempt to revolutionize the classical music landscape through a diverse arsenal of tactics, such as using technological advances, employing new promotional techniques, sharing information, and revitalizing the concert experience. Although none of these techniques made significant strides toward the original goal of increasing audiences and ticket sales, the report does make clear that there is a large potential audience that can be reached with the right distribution methods. Some tactics (such as the Concert Companion) are effective at reaching out to the casual audience, and others (such as the audience education programs) appeal only to the hardcore subscribers. Orchestras and their funders have to think carefully to match their tactics to their desired outcomes, and this report provides a good starting point for making those decisions.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2010/02/arts-policy-library-the-search-for-shining-eyes/feed/</wfw:commentRss>
		<slash:comments>8</slash:comments>
		</item>
		<item>
		<title>Response to Arts Policy Library: Breakthroughs in Shared Measurement</title>
		<link>https://createquity.com/2009/10/response-to-arts-policy-library-breakthroughs-in-shared-measurement/</link>
		<comments>https://createquity.com/2009/10/response-to-arts-policy-library-breakthroughs-in-shared-measurement/#respond</comments>
		<pubDate>Sat, 17 Oct 2009 06:38:03 +0000</pubDate>
		<dc:creator><![CDATA[Guy Yedwab]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[arts policy library]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=818</guid>
		<description><![CDATA[Recently, I had the honor of posting my first contribution to Createquity&#8217;s Arts Policy Library, my response to the report “Breakthroughs in Shared Measurement and Social Impact.” In the comments section, one of the report&#8217;s authors Lalitha Vaidyanathan took the time to respond to two of the main points of my response. The first point<a href="https://createquity.com/2009/10/response-to-arts-policy-library-breakthroughs-in-shared-measurement/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p>Recently, I had the honor of posting <a href="https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement.html">my first contribution to Createquity&#8217;s Arts Policy Library</a>, my response to the report “<a href="http://www.fsg-impact.org/ideas/item/breakthroughs_in_measurement.html">Breakthroughs in Shared Measurement and Social Impact</a>.” In the comments section, one of the report&#8217;s authors Lalitha Vaidyanathan took the time to respond to two of the main points of my response.</p>
<p>The first point that Ms. Vaidyanathan responds to was my desire to see more data on the effectiveness of the shared measurement programs examined by the report. First Ms. Vaidyanathan writes:</p>
<blockquote><p>Your observation about the relative youth of the systems investigated for the report is spot on. Success Measures and Cultural Data Project are the oldest of the systems we examined in depth and both started operations around 2005. As such their experience is limited to 3-4 years. At this stage, the effectiveness measure of these systems that is most quantifiable is the savings seen in terms of time and cost. The data supporting this is sprinkled throughout the report but in the interest of clarity, it is summarized and elaborated upon here below. In terms of increased impact as a result of using these systems, Strive (the example of an Adaptive Learning System detailed in the report), even though just two years in operation (it was launched in late 2008), has started to see positive improvements on many of the 10 community-level indicators it tracks. While this was mentioned in the report, it was not elaborated upon – I take the opportunity to do so here.</p></blockquote>
<p>Firstly, I hope I made it clear in my first report that while I very much hungered for data on the effectiveness of the programs, I did understand the youth of the projects. In a way, my comments were less a criticism of the original report, and more a desire to see more investigation in the future along similar lines.</p>
<p>Secondly, Ms. Vaidyanathan is right that the best way to tackle such a short lifespan is to look at the most straightforward, short-term impact, which is the time and money saved by reducing wasteful grant-writing.</p>
<blockquote><p>(a) Increased effectiveness in terms of time saved<br />
Time saving resulting from the use of the Cultural Data Project system offers a good example. The system streamlines both grant application and reporting for participating arts organizations. Assuming an average grant size of $50,000 (and this is an over-estimate since Center for Effective Philanthropy (CEP) data shows this average to be true only for the largest foundations in the US), an organization with an annual budget of $500,000 would have 10 funders. Assuming grant application and reporting for each funder takes about 40 hours (this is the median data reported in CEP’s Grantee Perception Report for health foundations), that is a total of 400 hours a year. The Cultural Data system on the other hand, requires annual update of a single Data Profile – while there are 300 questions, many of these (like contact, background, description, etc) need only be entered once. The effort here would be at the most 2 weeks of work or 80 hours a year – this represents an 80% time saving for the non profit.</p></blockquote>
<p>The numbers are ball-parked, but they seem useful enough to illustrate the time-saving. However, the numbers are still a projection, based on the following stated assumptions (a quick sketch of the arithmetic follows the list):</p>
<ul>
<li>Average grant size of $50,000 (estimate based on CEP Data)</li>
<li>Organizational budget of $500,000 (hypothetical)</li>
<li>Grant application and reporting time of 40 hours per funder (source: CEP Grantee Perception Report)</li>
<li>Cultural Data system requires 80 hours of work a year (personal projection)</li>
</ul>
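<p>Just to make the arithmetic concrete, here is a minimal sketch of the calculation in Python, using only the assumed figures listed above (these are Ms. Vaidyanathan&#8217;s estimates, not measured data):</p>
<pre><code># Sketch of the 80% time-saving estimate. Every input below is one
# of the stated assumptions above, not a measured figure.
org_budget = 500_000        # hypothetical annual budget ($)
avg_grant = 50_000          # assumed average grant size ($)
hours_per_funder = 40       # CEP median for application and reporting

num_funders = org_budget // avg_grant               # 10 funders
traditional_hours = num_funders * hours_per_funder  # 400 hours a year
cdp_hours = 80              # projected annual Data Profile update

saving = 1 - cdp_hours / traditional_hours
print(f"{saving:.0%} time saving")                  # -> 80% time saving
</code></pre>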
<p>There is also an unstated assumption that underpins her conclusion: that a grantee organization using the Cultural Data system does not need to apply anywhere else for funding.</p>
<p>I compared these shared measurement systems to the <a href="https://www.commonapp.org/CommonApp/default.aspx">Common App</a> in my analysis of the report, and my own experience with the Common App makes me think that such an assumption is not founded. I applied to 16 schools when I was applying to college, 11 of which were on the Common App. NYU, my top choice (where I attend now), was on the Common App but required an additional supplement. There were a few other schools in a similar category. I also applied to a number of schools in the UC system, which had its own equivalent of the Common App (one UC application for all of the schools). In the end, I wound up filling out more applications than just the single Common App.</p>
<p>This is not to say that the Common App was useless. It did in fact allow me to apply to more schools in less time. I&#8217;m not using this as an argument to say that shared measurement systems are not time-savers, I simply want to point out that the 80% time saving projection strikes me as rosy, especially in the early days.</p>
<p>After all, in context of the relative youth of these systems, the question is how many funders within a given field have signed on to the shared measurement system. To compare to the Common App again, the Common App allows applications to 150 colleges in the United States. According to the <a href="http://www.census.gov/prod/2003pubs/02statab/educ.pdf">US Census</a>, there were 4,084 higher learning institutions in 1999. In the case of the Common App, I only needed to be accepted by one. But if I&#8217;d needed to be accepted to 10, I would have had to apply to more, and more of those colleges might have been non-Common App schools.</p>
<p>In the funding world, where sources are more limited and more are needed, it is important to ask how many of your funders are going to be participating in the shared measurement system. Can the organizations which participate in such systems put together their entire budget with funding acquired from participating funders? Or is the figure 80% of budget? 60% of budget? It is an important question for an organization to contemplate as it decides on whether or not to participate.</p>
<p>Also, remember that budget sizes are fluid, and that human beings are apt to think that more money is better. Suppose an organization saves 80% of its time on the grants it planned to apply for. Will it spend that time on its organization? Or will it simply apply for more grants, hoping for more wins and more money?</p>
<p>I&#8217;m also curious about the time-saving from the perspective of the funder. Does the ease of applying for a grant lead to more applications? If so, how would the increase in applications compare to the time saved due to an easier review process?</p>
<p>My hunch is that, when these questions are answered, shared measurement systems are still more effective and save time. But I also think that it may not be quite as much time as we would think. Organizations who think that joining a shared measurement system entitles them to fire their grant-writing staff might be unpleasantly surprised.</p>
<blockquote><p>(b) Increased effectiveness in terms of cost saving<br />
The Success Measures Data System (SMDS) serves as a good example here. In the absence of outcomes data from a system like SMDS, funders would have to use external evaluators to understand the outcome of a grant. The cost [of] formal external evaluation can run anywhere from a few tens of thousands of dollars to millions of dollars – let us assume an average cost of external evaluation of around $50,000. The SMDS annual subscription fee is $2,500 – assuming an external evaluation is conducted every 5 years – that represents a 75% cost saving.</p></blockquote>
<p>I have an easier time believing in the cost savings, although some of the same problems from the time savings also apply here. The example Ms. Vaidyanathan chose is one of the more clear-cut aspects of cost saving: being part of a system that generates outcomes data does reduce the need to bring in external evaluators to generate outcomes data. Other costs, such as the salaries of full-time grant-writing staff, might be harder to reduce, but this one seems a fair point.</p>
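<p>The same back-of-the-envelope check can be run here; a minimal sketch, again using only the assumed figures from the quoted passage:</p>
<pre><code># Sketch of the 75% cost-saving estimate. Inputs are the assumed
# figures quoted above, not audited costs.
external_eval_cost = 50_000   # assumed average external evaluation ($)
years_between_evals = 5       # assumed evaluation frequency
smds_annual_fee = 2_500       # SMDS subscription ($ per year)

annualized_eval_cost = external_eval_cost / years_between_evals  # $10,000/yr
saving = 1 - smds_annual_fee / annualized_eval_cost
print(f"{saving:.0%} cost saving")                               # -> 75%
</code></pre>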
<blockquote><p>It is important to note that both the above calculations do not capture the other important benefits realized from such systems – improved data quality  and reduced need for evaluation expertise (definition/measurement of outcome indicators requires some expertise – a specialized skill set that most non profits do not have in-house). We would expect both the above benefits to result in increased programmatic impact – as noted below, due to the early stage of these systems, quantitative impact data is not yet available.</p></blockquote>
<p>All of those are fair points. Certainly, if it turns out that the time saving is really less than 80% and the cost saving is really less than 75%, it is worth pointing out that the other important benefits are part of the cost-benefit analysis as well. Once the quantitative impact data becomes available, we&#8217;ll know how those other benefits compare to the time and cost savings.</p>
<blockquote><p>(c) Increase effectiveness in terms of impact<br />
The higher level benefits of the above mentioned two systems – increased knowledge for non profits (ability to learn from higher performing peer organizations) and funders (ability to make better programmatic grant decisions) – and the resulting increase in impact of the work is not yet documented in a quantitative manner due to the early stage of development of these systems. As you suggest, we do think a follow up report in a few years that documents this will be beneficial for the field.</p>
<p>Strive however, does provide some quantitative evidence of this point. Even though only in operation for two years, it has already seen positive improvement in all 5 of its major Goal areas (see http://www.strivetogether.org/documents/ReportCard/2009StriveReportCard.pdf  for a copy of its 2009 report card). Perhaps as importantly, the report card also allows it to identify areas where the indicators are trending downwards (e.g. Goal 5 indicators for Cincinnati State Technical and Community College that are trending downwards include College Readiness, Retention in Associates Degree, College Graduation, Number of Associates Degrees Granted) and thus where additional effort would be needed in the upcoming years by those action networks. This ability to identify issues, adapt strategies based on measurement and then act on it is only possible in Adaptive Learning Systems. It is our belief, as we state in the Conclusion section of our report, that Adaptive Learning Systems hold the greatest potential of moving the field toward its ultimate goal of impacting and solving social problems.</p></blockquote>
<p>The <a href="http://www.strivetogether.org/documents/ReportCard/2009StriveReportCard.pdf">Strive 2009 Report Card</a> is an excellent blueprint of an informational infrastructure. The quality and depth of information in the report is impressive, and it is presented in a manner which, although dense, is easy to follow. The potential there to unify efforts and isolate problem areas is clear.</p>
<p>What the Strive Report Card is not is a meta-analysis. Strive is an analysis of the outcomes in the community, but it is not an analysis of Strive. We don&#8217;t know whether money was effectively used in Strive or whether it went to waste, and we don&#8217;t know which Strive-related projects impacted which parts of the report card. In management, that&#8217;s process maturity: having a process about your process. Strive can clearly isolate issues in the outward community, but it doesn&#8217;t yet seem able to isolate issues within itself (unless it does so in internal documents that I haven&#8217;t located, and which I&#8217;d love to be made aware of).</p>
<p>Again, it&#8217;s called process maturity for a reason. Strive is one of the more mature systems, but it isn&#8217;t fully matured yet. It is only three years old.</p>
<p>Lastly, Vaidyanathan addresses my main personal insight into the program:</p>
<blockquote><p>2. Role of public sector<br />
The Arts specific implication you suggest of having the public sector invest in a shared measurement infrastructure for the field is an excellent one. The original development of the Cultural Data Project in Pennsylvania did include the Pennsylvania Council on the Arts which is a public agency in the Office of the Governor of PA. There is however, much greater scope for public sector involvement in building of such infrastructure. Perhaps with the setting up of agencies such as the White House Office of Social Innovation, infrastructure efforts such as that suggested here might be more likely to happen.</p></blockquote>
<p>The potential for the White House Office of Social Innovation and Civic Engagement (WHO-SICE) is one I briefly entertained in my own blog in a post <a href="http://culturefuture.blogspot.com/2009/05/nea-nro-and-who-sice.html">here</a>, where I proposed that WHO-SICE would become the patron saint of young, new arts organizations and the NEA would become a caretaker of large, old, established organizations in the traditional grant-making structure. I&#8217;m not sure if creating shared measurement systems would fall more under WHO-SICE or the NEA in that dichotomy. However, if you recall the furor that was whipped up around the NEA when it tried to get individual artists to participate in a National Day of Service, you&#8217;ll see that any scheme in which the NEA helps the arts without being accused of influencing the artists themselves is probably a better direction for the NEA.</p>
<p>So, thanks to Ms. Vaidyanathan for directly responding to my analysis of &#8220;Breakthroughs in Shared Measurement.&#8221; I think we&#8217;re both basically in agreement that there&#8217;s a definite opportunity for a follow-up report three to five years from now that will be able to dive into the impact of the shared measurement systems with more depth and quantitative rigor. The youth of the programs in question prevents answering many of my questions at this time, but I appreciate the opportunity to air them and get responses.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2009/10/response-to-arts-policy-library-breakthroughs-in-shared-measurement/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Arts Policy Library: Breakthroughs In Shared Measurement</title>
		<link>https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement/</link>
		<comments>https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement/#comments</comments>
		<pubDate>Thu, 08 Oct 2009 06:24:38 +0000</pubDate>
		<dc:creator><![CDATA[Guy Yedwab]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[arts policy library]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=777</guid>
		<description><![CDATA[[Note to readers: I&#8217;m very pleased to introduce to you the first Arts Policy Library entry (not to mention first Createquity post of any kind) not written by me. Guy Yedwab is a budding theater professional who first became known to me through the magic of Twitter and later through his blog, CultureFuture. Currently a<a href="https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p><em>[Note to readers: I&#8217;m very pleased to introduce to you the first Arts Policy Library entry (not to mention first Createquity post of any kind) not written by me. <a href="http://www.guyyedwab.com/">Guy Yedwab</a> is a budding theater professional who first became known to me through the <a href="http://twitter.com/indymill">magic of Twitter</a> and later through his blog, <a href="http://culturefuture.blogspot.com/">CultureFuture</a>. Currently a senior at New York University, he has already founded a <a href="http://www.organsofstate.org/">theater company</a> and a <a href="http://www.indymill.org/">publishing house</a> in the midst of going to school full-time, completing various internships and odd jobs, and now, writing for Createquity. I&#8217;m excited to have Guy on board and hope you&#8217;ll give him a warm welcome. -IDM]</em></p>
<p><a href="https://createquity.com/wp-content/uploads/2009/10/Breakthroughs_in_Measurement_Comp1.gif"><img decoding="async" class="alignleft size-full wp-image-788" style="margin: 5px 10px;" title="Breakthroughs in Shared Measurement and Social Impact" src="https://createquity.com/wp-content/uploads/2009/10/Breakthroughs_in_Measurement_Comp1.gif" alt="Breakthroughs in Shared Measurement and Social Impact" width="80" height="117" /></a>In “<a href="http://www.fsg-impact.org/ideas/item/breakthroughs_in_measurement.html">Breakthroughs in Shared Measurement and Social Impact</a>” by <a href="http://www.fsg-impact.org/">FSG Social Impact Advisors</a>, authors Mark Kramer (FSG&#8217;s co-founder), Marcie Parkhurst, and Lalitha Vaidyanathan take a look at the different ways foundations and their grantees have tackled the lack of performance measurement standards in the nonprofit sector. The short 26-page report was released in July 2009 and funded by the <a href="http://www.hewlett.org/">William and Flora Hewlett Foundation</a>, one of the largest foundations in America. FSG Social Impact Advisors is a nonprofit consulting firm that helps foundations and philanthropic organizations make effective grants. It should be noted that the Hewlett Foundation participates in one of the shared measurement programs documented in the report, run by the <a href="http://www.effectivephilanthropy.org/">Center for Effective Philanthropy</a>.</p>
<p><strong>SUMMARY</strong></p>
<p>Kramer et al. are clear about their aims early on. The paper opens as follows:</p>
<p>“A surprising new breakthrough is emerging in the social sector: A handful of innovative organizations have developed web-based systems for reporting the performance, measuring the outcomes, and coordinating the efforts of hundreds or even thousands of social enterprises within a field. These nascent efforts carry implications well beyond performance measurement, foreshadowing the possibility of profound changes in the vision and effectiveness of the entire nonprofit sector.”</p>
<p>It is these breakthroughs that  the report seeks to document, primarily through interviews with participants and summaries of the systems involved. The authors isolate three important categories of breakthroughs, each building on the last:</p>
<ol>
<li><strong>Shared Measurement Platforms</strong>, which are an agreed-upon set of benchmarks developed by funding organizations and their grantees;</li>
<li><strong>Comparative Performance Systems</strong>, which build upon a Shared Measurement Platform and look for ways to compare the results between different grantee organizations; and</li>
<li><strong>Adaptive Learning Systems</strong>, which seek to leverage both of the above systems to develop strategies and coordinate resources between multiple foundations and grantees.</li>
</ol>
<p>The report looked at twenty different efforts of varying sizes and types.</p>
<p>Kramer et al. first examine the need for these initiatives. Each of these systems emerged in response to the same fundamental problem: the extreme inefficiency of the grant application process. Although foundations in the same field were attempting to evaluate the same organizations, each had its own process and its own benchmarks. Grantee organizations found themselves wasting large amounts of time filling out different applications, spending significant contributed income simply on the process of acquiring more money for their operations. Meanwhile, foundations and grantee organizations were not learning from either each other or their peers. The problem was described vividly in <a href="http://www.projectstreamline.org/">Project Streamline</a>&#8216;s 2008 report <a href="http://www.projectstreamline.org/documents/PDF_Report_final.pdf">“Drowning in Paperwork, Distracted from Purpose.”</a></p>
<p>In addition to the problem of overhead and waste inherent in duplicative grant applications, the foundations involved in these experiments recognized that organizations did not have professional benchmarks to judge their own success. In the private sector, companies have specific quantitative standards to assess their impact: market share, revenues, etc. But in the field of nonprofits, Kramer et al. note that foundations currently face a choice between two equally problematic ways of evaluating impact: haphazard self-reporting, or expensive third-party auditing. Foundations and their grantees have a mutual interest in systematically overcoming these difficulties.</p>
<p>To address these needs, a number of different and independent Shared Measurement Systems have been set up, usually by a group of large foundations, as reusable yardsticks for their applicants. These systems typically take the form of web-based applications that allow representatives of a grantee organization to plug in data from their own organization and analyze the results. In the case of Comparative Performance Systems, the participants can compare this data against other users in their field. Adaptive Learning Systems improve on this system by using real-world, face-to-face meetings to discuss the meaning of these results with others in the field.</p>
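<p>For a sense of what the comparison step in a Comparative Performance System might look like, here is a purely hypothetical sketch; the metric names and every figure are invented for illustration and do not come from any system described in the report:</p>
<pre><code># Hypothetical sketch of a Comparative Performance System's core step:
# benchmarking one organization's metrics against field medians.
# Metric names and all figures are invented for illustration.
from statistics import median

field_data = {
    "audience_growth_pct": [2.0, 5.5, -1.0, 3.2, 4.8],
    "cost_per_attendee":   [41.0, 38.5, 52.0, 45.5, 36.0],
}
our_metrics = {"audience_growth_pct": 4.1, "cost_per_attendee": 47.0}

for metric, values in field_data.items():
    benchmark = median(values)
    position = "above" if our_metrics[metric] > benchmark else "at or below"
    print(f"{metric}: {our_metrics[metric]} ({position} field median {benchmark})")
</code></pre>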
<p>The report notes certain obstacles to implementing these programs, which all seem attributable to fears on the part of participants. Grantees fear the complexity of some of these systems, worry about disclosing too much internal information, and are afraid of running afoul of funding biases if they participate. Foundations are hesitant to spend money on developing such systems because the spending does not go directly towards their stated goals; although such systems might indirectly help the homeless or the environment, the impact is less immediate. There is also the “free-rider” problem: foundations might pour substantial time and effort into developing a system, only to have later foundations and grantees benefit without having paid in originally. The authors note, however, that in the nonprofit field, concern about the free-rider problem should be less prevalent, seeing as the whole point of philanthropy is to benefit others.</p>
<p>The report identifies eight success factors for implementing these systems:</p>
<ol>
<li>Strong leadership and substantial funding</li>
<li>Broad engagement by many organizations</li>
<li>Voluntary participation</li>
<li>Web-based technology</li>
<li>Independence from funders</li>
<li>Ongoing staffing to support member organizations</li>
<li>Testing and continual improvement</li>
<li>Users periodically swapping information</li>
</ol>
<p>The appendix contains four case studies and detailed information about the 20 organizations examined.  The organizations tackle issues as diverse as housing and economic development, cultural development, education, and environmental preservation—there is even an Adaptive Learning System for marine fisheries.</p>
<p>The four case studies have a number of notable elements in common. Most of these programs involved web-based data-collection and sharing and a support organization to help their members. Many are free to use, but others have subscription costs that can range into the thousands—although the authors are quick to note that the member organizations still save money over <em>not </em>using the system. Most of the programs were started by a few large, locally influential funders, such as the <a href="http://www.packard.org/">David and Lucile Packard Foundation</a> or the <a href="http://www.acumenfund.org/">Acumen Fund</a>. The initial investment averages roughly $1.2 million (excluding the <a href="http://www.pewtrusts.org/our_work_detail.aspx?id=20">Cultural Data Project</a>, a clear outlier at $2.3 million), and time to develop ranges from 2 to 5 years.</p>
<p><strong>ANALYSIS</strong></p>
<p>I found the report’s conclusions intriguing and the arguments put forward in favor of these shared measurement systems persuasive. The common-sense approach to solving problems through sharing of information and reducing overhead seems so intuitive that it&#8217;s hard to believe that these programs aren&#8217;t more widespread. My only criticism of the report is that it does not examine the implementation and effectiveness of the various programs in much depth.</p>
<p>The report is entirely qualitative in nature. For the most part, the authors limit themselves to descriptions of the programs they examine; there is no quantitative look at the effectiveness of any of the programs, or any analysis of the metrics used within the programs. The qualitative analysis is drawn entirely from interviews of participants in the organizations, and the interviews are all positive in nature. I wouldn&#8217;t go so far as to ascribe this to deliberate bias on the part of the authors, but the lack of any sustained criticism from the participants or perspectives from organizations that chose not to participate may create an overly rosy picture.</p>
<p>The report holds that the trends described point toward a new direction for the relationship between grantees and the foundations that support them. The opportunities for reducing overhead and bringing order to the chaos of searching for foundation support are clear. In other contexts, this approach has been effective: the <a href="https://www.commonapp.org/CommonApp/default.aspx">Common App</a>, for instance, has made the process of applying for colleges somewhat more bearable for the thousands of students who use it.</p>
<p>Furthermore, there is a particular passage in the report where Kramer et al. seem to reach past the stated purpose of the programs to see an even larger picture. They note that the now-famous success of the Harlem Children&#8217;s Zone stems from the powerful coordination between all aspects of the education process. They see the potential for Adaptive Learning Systems to create the groundwork for networks of multiple organizations, as closely coordinated as the<a href="http://www.hcz.org/"> Harlem Children&#8217;s Zone</a>, working in tandem to accomplish the same goals in other contexts.</p>
<p>Kramer et al. already see this happening in one example, the <a href="http://www.strivetogether.org/">Strive Initiative.</a> Strive is a large-scale partnership in the Greater Cincinnati area that brings together three public school districts, one diocesan district, eight universities/community colleges, and hundreds of other education nonprofits. Strive sets forward a series of Community-level Progress Indicators, measuring what percentage of children are performing adequately in each year. Furthermore, educational nonprofits using similar approaches to improve these progress indicators collaborate in Student Success Networks (SSN), such as the Tutoring SSN that encapsulates school districts, tutoring organizations, and the Cincinnati Metropolitan Housing Authority. What began as a shared measurement system (using the Community-level Progress Indicators) developed into an adaptive learning system (the SSNs).</p>
<p>In a way, the report&#8217;s argument for the impact of these measurements reminds me of <a href="http://www.creativeclass.com/">Richard Florida</a>&#8216;s statement that the culture of the creative class could transform the effectiveness of manufacturing and service industries; here, it seems that the Organization Man has something positive to give to the creative class.</p>
<p>Despite the clear logic and potential of all these elements, the report offers little examination of the impact of these programs, and particularly little examination of the actual systems of measurement themselves. Examples of questionnaires and metrics are provided, but the lack of critical analysis of their effectiveness and use makes it difficult to evaluate their impact. The only empirical evidence put forward is a circumstantial increase in funding for the arts in Philadelphia, possibly as a result of the organization and empirical benchmarks put forward by the Cultural Data Project, but this is only one data point and the causation isn&#8217;t very clear.</p>
<p>In fairness, there are a number of reasons why the report does not attempt that level of depth. Most of the projects examined have been in operation for less than a decade, and many of them have not yet been fully implemented. The “breakthroughs” hailed by Kramer et al. are still in their infancy, and it may be difficult to judge their full impact at this time. Still, an attempt to analyze the metrics used by the different organizations would have been very helpful in understanding how they operate.</p>
<p>Perhaps the goals of this report should be revisited in the next decade, with a more detailed and quantitative analysis of the effects of these programs. A future report could create more specific guidelines for how to create new successful systems, and share the lessons learned by the early trailblazers. This could have the added effect of lowering the development time and cost of new systems, as well as building an even stronger case for adoption and expansion.</p>
<p><strong>IMPLICATIONS</strong></p>
<p>The implications of this report point the way toward an excellent new strategy for approaching philanthropy, and by extension arts and culture. Although most of the organizations examined were not arts organizations, it is clear that the lessons translate across boundaries from anti-poverty groups to performing arts. The <a href="http://www.culturaldata.org/">Cultural Data Project</a>, specifically, looks to be the vehicle for this change.</p>
<p>There is, however, an arts-specific implication overlooked by the authors of the report, and this is the aspect that most interested me. The report focuses largely on the current structure of these shared measurement organizations, namely systems led by large private foundations. It does not investigate the possibility of public sector involvement. For instance, the report discusses the reluctance of foundations to embark on this sort of indirect investment. This weakness, however, becomes a strength when applied to the public sector. One of the political barriers to the debate in favor of arts funding has been a reluctance (to put it mildly) of public officials to risk funding artists and arts organizations directly, especially at the national level. It is exceptionally easy to attack the arts by attacking a few examples of controversial organizations, and there will always be a toxic debate surrounding which organizations to support and by how much.</p>
<p>The power of this report is that it puts forward another strategy of supporting philanthropy (and in specific the arts), one which avoids the problem of directly supporting organizations. Instead, it proposes an <em>informational infrastructure </em>for the arts.</p>
<p>In this age, we&#8217;ve come to recognize that knowledge and data have the particular power to maximize the impact of any organization. The government already provides its greatest support for the arts in infrastructural investments that are equally available to all arts organizations—the institutions of copyright and nonprofit tax credits are, from one perspective, infrastructural investments in the arts. The National Endowment for the Arts could appropriate a fraction of its budget to create national standards for data sharing in the arts, eventually creating adaptive learning systems for the arts.</p>
<p>The political advantage of this approach is that it avoids the picking and choosing of organizations that touch on sensitive issues of censorship, ideology, and artistic merit. Foundations are better placed to support individual organizations and artists—this way, the government&#8217;s investment will facilitate the ability of foundations to support those artists and organizations.</p>
<p>To a large extent, this is the federal government&#8217;s approach to stimulating industry. It would be very difficult to measure the impact of the Bureau of Labor Statistics on the national economy, but no one would deny that for a relatively modest cost it provides huge benefits to the world of labor. So it could be for the arts.</p>
<p>Consider, for a moment, that with an investment of $2.3 million, such an infrastructure is being established in seven states in the form of the Cultural Data Project. Suppose the cost per state is fixed; it would then take roughly $16.4 million to cover all 50 states (this is a very rough approximation, just for the current point I&#8217;m making). This is still only a portion of the $50 million in stimulus money appropriated for the arts, and I suspect that the cost would likely be less than that. In other words, for a very plausible amount of money, the National Endowment for the Arts could invest in a project whose benefits would be accessible to <em>every </em>arts organization.</p>
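<p>A minimal sketch of that extrapolation, assuming (as the rough estimate above does) a fixed per-state cost:</p>
<pre><code># Crude extrapolation of the Cultural Data Project investment to all
# 50 states, assuming the per-state cost stays fixed.
cdp_investment = 2_300_000    # reported CDP investment ($)
cdp_states = 7                # states covered by that investment
all_states = 50

per_state = cdp_investment / cdp_states   # roughly $329,000 per state
national = per_state * all_states         # roughly $16.4 million
print(f"~${national / 1e6:.1f} million for all {all_states} states")
</code></pre>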
<p>This report is an important first step in building such an approach. Although more work remains to be done, the report provides a clear method of cutting overhead, sharing information, and eventually building a strategy for coordinating the arts.</p>
<p>[<strong>UPDATE</strong>: Don&#8217;t miss the <a href="https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement.html#comment-653">lengthy comment below</a> from Lalitha Vaidyanathan, one of the report&#8217;s co-authors, and Guy Yedwab&#8217;s <a href="https://createquity.com/2009/10/response-to-arts-policy-library-breakthroughs-in-shared-measurement.html">response</a>.]</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2009/10/arts-policy-library-breakthroughs-in-shared-measurement/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
	</channel>
</rss>
