<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Createquity.</title>
	<atom:link href="https://createquity.com/tag/impact-assessment/feed/" rel="self" type="application/rss+xml" />
	<link>https://createquity.com</link>
	<description>The most important issues in the arts...and what we can do about them.</description>
	<lastBuildDate>Wed, 15 Jul 2020 20:17:39 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Come be nerdy with Ian and Nina Simon in Santa Cruz!</title>
		<link>https://createquity.com/2014/01/come-be-nerdy-with-ian-and-nina-simon-in-santa-cruz/</link>
		<comments>https://createquity.com/2014/01/come-be-nerdy-with-ian-and-nina-simon-in-santa-cruz/#respond</comments>
		<pubDate>Fri, 24 Jan 2014 17:59:45 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[Animating Democracy]]></category>
		<category><![CDATA[community development]]></category>
		<category><![CDATA[Fractured Atlas]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[museums]]></category>
		<category><![CDATA[Nina Simon]]></category>
		<category><![CDATA[WolfBrown]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=6223</guid>
		<description><![CDATA[Have you ever wondered what all this impact assessment and evaluation stuff is all about, but haven&#8217;t been sure how to get started? I bet you&#8217;re not alone! That&#8217;s why I&#8217;m psyched to be involved with a great and affordable professional development event happening this summer in gorgeous Santa Cruz, CA, called Museum Camp 2014: <a href="https://createquity.com/2014/01/come-be-nerdy-with-ian-and-nina-simon-in-santa-cruz/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p>Have you ever wondered what all this impact assessment and evaluation stuff is all about, but haven&#8217;t been sure how to get started? I bet you&#8217;re not alone! That&#8217;s why I&#8217;m psyched to be involved with a great and affordable professional development event happening this summer in gorgeous Santa Cruz, CA, called <a href="http://www.santacruzmah.org/museumcamp2014/">Museum Camp 2014: Social Impact Assessment</a>.</p>
<p style="text-align: center;"><a href="http://www.fracturedatlas.org/site/blog/wp-content/uploads/2014/01/promo_image.png"><img fetchpriority="high" decoding="async" class="aligncenter size-full wp-image-11294" title="promo_image" alt="promo_image" src="http://www.fracturedatlas.org/site/blog/wp-content/uploads/2014/01/promo_image.png" width="494" height="363" /></a></p>
<p>Museum Camp is a creation of <a href="http://museumtwo.tumblr.com/">Nina Simon</a> and the <a href="http://www.santacruzmah.org/">Santa Cruz Museum of Art &amp; History</a>. Createquity readers might recognize Nina and her fantastic work at Santa Cruz MAH from such Top 10 Arts Policy Stories posts as <a href="https://createquity.com/2013/01/the-top-10-arts-policy-stories-of-2012.html">2012</a>&#8217;s, not to mention many shout-outs before and since in blog posts here and there. Nina used to be a rockstar experience design consultant in the museum field and earned a measure of fame at the beginning of this decade as the author of <em><a href="http://www.participatorymuseum.org/">The Participatory Museum</a></em>, which you can read online for free. A couple of years ago, she decided to take the job as director of the Santa Cruz MAH, and she and her team have been up to amazing things since then, including a <a href="http://www.santacruzmah.org/museumcamp2013/">previous version</a> of Museum Camp that sounded like <a href="http://museumtwo.blogspot.com/2013/07/hack-museum-camp-part-2-making-magic.html">pretty</a> <a href="http://www.santacruz.com/news/2013/07/16/a_night_in_the_museum1">much</a> the most fun anyone has had in a museum ever.</p>
<p>All that fun ultimately adds up to something significant, though, and it&#8217;s important to be able to describe, effectively and convincingly, what&#8217;s meaningful about our work to people who weren&#8217;t there &#8211; not to mention ourselves. So my colleagues at <a href="http://www.fracturedatlas.org">Fractured Atlas</a> and I are happy to be helping Nina bring to life a new edition of Museum Camp focused on social impact assessment: a three-day event in which small teams of people will develop creative ways to evaluate the work that diverse organizations are doing to transform communities. Our focus is on social impact in communities, and we will encourage teams to look at complex outcomes &#8211; like safety, cohesion, compassion, and identity &#8211; that are not commonly covered in standard evaluative practices. This is a learning experience with a heavy focus on actual doing throughout the event. In addition to representatives from Fractured Atlas and MAH, we&#8217;ll have &#8220;camp counselors&#8221; from the United Way, <a href="http://wolfbrown.com/">WolfBrown</a>, <a href="http://www.harderco.com">Harder &amp; Co.</a>, <a href="http://animatingdemocracy.org/">Animating Democracy</a>, and more on hand to help attendees navigate the conceptual and practical issues associated with measuring what matters.</p>
<p>If you are interested in attending, you can <a href="http://www.santacruzmah.org/museumcamp2014/apply-now/">fill out an application</a><span> through February 28. Space is extremely limited, so the sooner the better. We look forward to seeing you!</span></p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2014/01/come-be-nerdy-with-ian-and-nina-simon-in-santa-cruz/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Uncomfortable Thoughts: Are We Missing the Point of Effective Altruism?</title>
		<link>https://createquity.com/2013/12/uncomfortable-thoughts-are-we-missing-the-point-of-effective-altruism/</link>
		<comments>https://createquity.com/2013/12/uncomfortable-thoughts-are-we-missing-the-point-of-effective-altruism/#comments</comments>
		<pubDate>Wed, 04 Dec 2013 14:44:48 +0000</pubDate>
		<dc:creator><![CDATA[Talia Gibas]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[class]]></category>
		<category><![CDATA[collective impact]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[GiveWell]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[opportunity costs]]></category>
		<category><![CDATA[privilege]]></category>
		<category><![CDATA[uncomfortable thoughts]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=5892</guid>
		<description><![CDATA[People who want to do the most amount of good possible with the resources available don't tend to take the arts very seriously. What if they're right?]]></description>
				<content:encoded><![CDATA[<div id="attachment_5894" style="width: 385px" class="wp-caption aligncenter"><a href="http://flic.kr/p/4re3d"><img decoding="async" aria-describedby="caption-attachment-5894" class="size-full wp-image-5894" src="https://createquity.com/wp-content/uploads/2013/11/38871148_d92a4805531.jpg" alt="&quot;I want change&quot; by m.a.r.c." width="375" height="500" srcset="https://createquity.com/wp-content/uploads/2013/11/38871148_d92a4805531.jpg 375w, https://createquity.com/wp-content/uploads/2013/11/38871148_d92a4805531-225x300.jpg 225w" sizes="(max-width: 375px) 100vw, 375px" /></a><p id="caption-attachment-5894" class="wp-caption-text">&#8220;I want change&#8221; by m.a.r.c.</p></div>
<p>Toward the end of the summer, bioethicist Peter Singer raised the hackles of art lovers everywhere with a <a href="http://www.nytimes.com/2013/08/11/opinion/sunday/good-charity-bad-charity.html?pagewanted=all&amp;_r=0">New York Times op-ed</a> that considered a hypothetical dilemma: should you donate to a charity that combats blindness in the developing world or should you spend that money instead on an art museum? After running through a cost-benefit analysis of each option, he determined that the charity addressing blindness “offers [donors] at least 10 times the value” of the museum.</p>
<p>Ouch.</p>
<p>To no one&#8217;s surprise, the arts community didn’t exactly roll out the welcome mat for the piece, calling Singer’s argument “<a href="http://www.giarts.org/blog/janet/either-or-harmful-charities-and-society">a shocker</a>,” “<a href="http://www.artsjournal.com/realcleararts/2013/08/peter-singer-says-never-give-to-the-arts.html#comment-31415">absurd</a>,” and “<a href="http://creativeinfrastructure.org/2013/08/11/eitheror-or-and/">tyrannical</a>.” Another round of alarm ensued recently when none other than megaphilanthropist Bill Gates <a href="http://www.ft.com/intl/cms/s/2/dacd1f84-41bf-11e3-b064-00144feabdc0.html">threw his support</a> behind Singer’s thesis. The responses from our field to date have generally coalesced around two broad counter-arguments:</p>
<ul>
<li><b> Why does it have to be “either/or”? Why can’t we support both? </b>Singer forces a false choice in “<a href="http://blog.artsusa.org/2013/08/22/responses-to-peter-singers-good-charity-bad-charity-in-the-new-york-times/">assuming charitable giving is a zero sum game</a>.” Weighing the value of saving a life against the value of donating to an art museum is <a href="http://www.fracturedatlas.org/site/blog/2013/08/20/everyones-favorite-whipping-boy/">comparing apples to oranges</a> when “both are essential, and if either disappeared you’d be in bad shape.” We need a holistic approach to ensure we don&#8217;t &#8220;<a href="http://artscultureandcreativeeconomy.blogspot.com/2013/11/what-does-effective-altruism-mean-for.html">solv[e] Third World crises at the expense of fostering crises right here at home</a>.&#8221; Just as we have “<a href="http://www.giarts.org/blog/janet/either-or-harmful-charities-and-society">multiple passions in [our] lives</a>,” donors can and should target multiple causes and direct their charitable dollars in a “<a href="http://creativeinfrastructure.org/2013/08/11/eitheror-or-and/">proportionally prioritized</a>” manner. Anyway, we can’t <i>really </i>be sure that curing blindness is more important than inspiring the next Jackson Pollock, and even if we were, concentrating all our resources in one or two tried-and-true nonprofits runs counter to the “<a href="http://www.fracturedatlas.org/site/blog/2013/08/20/everyones-favorite-whipping-boy/">messiness and power of America’s [decentralized] approach to charity</a>.”</li>
</ul>
<ul>
<li><b>Saving lives is all fine and good – but only if those lives have meaning. </b>If we’re so concerned with making sure that people can see, shouldn’t we also try to make sure they <a href="http://online.wsj.com/news/articles/SB10001424052702303531204579205770596464870">have beautiful things to look at</a>? Singer’s logic is dangerous because he fails to acknowledge the “<a href="https://aamd.org/for-the-media/press-release/aamd-members-respond-to-good-charity-bad-charity">creative outlet[s] and emotional oas[e]s that only art museum[s] can provide</a>.” If all philanthropic dollars were channeled toward alleviating disease and poverty, arts and culture would languish, society would become monochromatic and dull, and life would <a href="http://www.nytimes.com/2013/08/15/opinion/is-there-a-better-worthy-cause.html">cease to be worth living</a>.</li>
</ul>
<p>As satisfying as these rebuttals may feel to arts advocates, they unfortunately miss the point. The crucial assumptions behind Singer’s argument are that</p>
<ol>
<li>“<b>there are objective reasons for thinking we may be able to do more good in one [sector] than in another</b>,” and</li>
<li><b>we have a moral obligation to make choices that do as much good as possible.</b></li>
</ol>
<p>It’s important to understand this perspective in the context of “effective altruism,” a <a href="http://www.youtube.com/watch?v=O02-06mdkC4&amp;feature=youtu.be">relatively nascent but growing area of applied ethics</a> that has been <a href="https://createquity.com/2013/11/no-strings-attached.html">featured</a> <a href="https://createquity.com/2009/01/revisiting-givewell.html">more</a> <a href="https://createquity.com/2009/12/givewell-grows-up.html">than</a> <a href="https://createquity.com/2008/07/rise-and-fall-and-rise-again-of.html">once</a> on this blog, not to mention a recent edition of <a href="http://www.thisamericanlife.org/radio-archives/episode/503/i-was-just-trying-to-help?act=1#play"><i>This American Life</i></a>. Besides Gates, fellow philanthropic heavyweight and <a href="http://www.ssireview.org/blog/entry/the_promise_of_effective_altruism">past Hewlett Foundation President Paul Brest</a> has declared himself a fan<i>. </i>“Effective altruists,” or EAs, are on a quest to “do good” by way of hard-nosed rationality. “Doing good” doesn’t mean recycling a little more, or occasionally doling out spare change to a beggar on the street. It doesn’t mean foregoing a high-powered corporate career to work for a nonprofit. It means taking the time to analyze how to do the <i>most amount of good possible with the resources available</i> – or, to use a more nerdy turn of phrase, to “<a href="http://www.givingwhatwecan.org/switzerland/events.php">[use] science and rational decision-making to help as many sentient beings</a>” as they can.</p>
<p>Most funders are already in search of a big “bang for your buck,” but in trying to identify the objectively best causes to support, effective altruists stray from the conventional wisdom of mainstream philanthropy. EAs <a href="http://www.effective-altruism.com/four-focus-areas-effective-altruism/">cast a global net</a> when determining where to focus, and often settle on <a href="http://www.givingwhatwecan.org/where-to-give/recommended-charities">supporting causes in faraway parts of the world</a>, the results of which they may never see in person. They also believe that while human lives are created equal, philanthropic causes <a href="http://blog.givewell.org/2012/05/02/strategic-cause-selection/">are not</a>. Those causes that can save or improve the most lives must take first priority.</p>
<p>How does this play out in practice? Let’s say you donate to the free medical clinic in your area. You do this for good reasons: you care about inequities in the American healthcare system, and want to give back to your community. You like the feeling you get when you walk by that clinic every day. Maybe you even know people who benefit from the services the clinic provides. The clinic gets its donation, and you get warm fuzzies. Everybody wins. Right?</p>
<p>Not so, an EA would counter. Despite your good intentions, your donation amounts to a <a href="http://www.givewell.org/giving101/Your-dollar-goes-further-overseas">near-waste of resources:</a></p>
<blockquote><p>We understand the sentiment that ‘charity starts at home,’ and we used to agree with it, until we learned just how different U.S. charity is from charity aimed at the poorest people in the world. Helping people in the U.S. usually involves tackling extremely complex, poorly understood problems… In the poorest parts of the world, people suffer from very different problems…</p>
<p>We estimate that it costs [Givewell’s] top-rated international charity less than $2,500 to save a human life… Compare that with even the best U.S. programs… over $10,000 per child served, and their impact is encouraging but not overwhelming.</p></blockquote>
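<p>As a purely illustrative sketch, the cost comparison in the quote above works out as follows. The dollar figures are the ones GiveWell quotes; the ratio is derived here and is not stated in the source.</p>

```python
# Cost figures as quoted above from GiveWell; the ratio is derived
# here for illustration only.
intl_cost_per_life_usd = 2_500    # top-rated international charity, per life saved
us_cost_per_child_usd = 10_000    # "even the best U.S. programs", per child served

ratio = us_cost_per_child_usd / intl_cost_per_life_usd
print(f"A U.S. program costs {ratio:.0f}x as much per beneficiary "
      f"as the international charity costs per life saved")
```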
<p>EAs <a href="http://www.effective-altruism.com/category/what-is-effective-altruism/">advocate</a> making evidence-based decisions even if they don’t resonate on an emotional or intuitive level:</p>
<blockquote><p>Effective altruism is consistent with believing that giving benefits the giver, but it’s not consistent with making this the driving goal of giving. Effective altruists often take pride in their willingness to give (either time or money) based on arguments that others might find too intellectual or abstract, and their refusal to give suboptimally even when a pitch is emotionally compelling. The primary/driving goal is to help others, not to feel good about oneself.</p></blockquote>
<p>If this approach leaves you with an empty feeling in the back of your throat, it is by design. “Opportunity costs” – the costs of choosing <i>not </i>to behave in a certain way – weigh heavily on EAs. Every time you make a donation, <a href="http://www.effective-altruism.com/category/efficient-charity/">considering where your money <i>could have gone</i></a><i> </i>is as important as considering where it will ultimately go (emphasis mine):</p>
<blockquote><p>In the “Buy A Brushstroke” campaign, eleven thousand British donors gave a total of £550,000 to keep the famous painting “Blue Rigi” in a UK museum. If they had given that £550,000 to buy better sanitation systems in African villages instead, the latest statistics suggest it would have saved the lives of about one thousand two hundred people from disease…  Most of those 11,000 donors genuinely wanted to help people … But these people didn’t have the proper mental habits to realize <b>that was the choice before them</b>, and so a beautiful painting remains in a British museum and somewhere in the Third World a thousand people are dead.</p></blockquote>
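<p>To make the opportunity-cost arithmetic in the quote concrete, here is a minimal sketch using only the figures quoted above; the per-life cost is derived, not stated in the source.</p>

```python
# "Buy A Brushstroke" figures as quoted above; the per-life cost is derived.
campaign_total_gbp = 550_000   # raised by 11,000 donors to keep "Blue Rigi" in the UK
lives_savable = 1_200          # lives the same sum could reportedly have saved

cost_per_life_gbp = campaign_total_gbp / lives_savable
print(f"Implied cost per life saved: about £{cost_per_life_gbp:,.0f}")  # ≈ £458
```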
<p>Weighing choices isn’t limited to how we spend our money – it also applies to <a href="http://80000hours.org/about-us">how we spend our time</a>. Just as EAs <a href="http://www.effective-altruism.com/category/what-is-effective-altruism/">dispute the notion</a> that people should support whichever charities they feel “passionate” about, they question whether channeling those passions into a nonprofit or medical career is the best way to make a difference. Many suggest instead that people “<a href="http://80000hours.org/earning-to-give">earn to give</a>,” saying they “might be better off…in a high-earning job and making a deliberate commitment to give a large portion of what [they] earn away.” The organization <a href="http://www.80000hours.org">80,000 Hours</a>, founded to “become the world’s number one source for advice on pursuing a career that truly makes a difference in an effective way,” <a href="http://80000hours.org/blog/183-the-worst-ethical-careers-advice-in-the-world">elaborates</a>,</p>
<blockquote><p>Working at a non-profit can be a great way to make a difference. But it’s no guarantee. Amazingly, lots of non-profits probably have <strong>no</strong> <strong>impact</strong>. And do workers at [a] non-profit have more impact than the people who fund them? The researchers who push forward progress? The entrepreneurs who transform the economy? Policy makers? Maybe. No one stops to ask.</p></blockquote>
<p>Putting ideas like these on the table is a great way to make those of us in the arts squirm. While there are echoes of the effective altruism movement in some recent trends within our field, like the “<a href="http://arts.gov/news/2013/national-endowment-arts-chairman-joan-shigekawa-announces-350000-research-grants">universal call</a>” for better data on the impact of the arts and the <a href="http://www.washingtontimes.com/news/2011/oct/10/study-arts-funding-benefits-wealthy-whites/">pointed questions about who ultimately benefits from arts funding</a>, the arts are chock-full of people – artists and arts administrators alike – who were drawn to their work by that same passion that EAs claim clouds our judgment. The idea of allowing cold rationality to dictate and limit our quest to “do good” flies in the face of our artistic sensibilities, and challenges the assumptions many of us made when we entered the nonprofit sector in the first place – even those of us who have a sincere desire to address social inequities.</p>
<p>Tempting as it may be, it would be short-sighted to dismiss the EA movement as the pet project of a bunch of aesthetically stunted curmudgeons. It’s hard to dispute the notion that we could improve the human condition if only we could get our act together and commit our resources to a data-driven approach. After all, the nonprofit darling of the moment, <a href="https://createquity.com/2013/08/collective-impact-in-the-arts.html">collective impact</a>, is based on the same premise. What effective altruism does is counter our cause-specific argument for the arts with a dizzying moral appeal for cause agnosticism. And to be honest, it’s hard to see how the arts win if they play the game by the EAs’ rules. The “both/and” argument mentioned previously is unlikely to sway an effective altruist who weighs each decision as a choice between two different futures, one in which a museum gets funded and <i>some </i>lives get saved and one in which the museum struggles and <i>more</i> lives get saved. Even if the museum shut down completely, its patrons could probably find or create an alternative “creative outlet and emotional oasis,” while the people dying of malaria can’t very well make the mosquito nets themselves. The “we give lives meaning” argument likewise rings hollow when we’re talking about lending privileged lives (anyone living on <a href="http://data.worldbank.org/indicator/SI.POV.2DAY">more than $2 a day</a> is privileged in a global context) a dose of incremental “meaning” <i>at the expense of </i>giving others a shot at basic survival. It also comes across as incredibly condescending to those others considering that they would likely never get the opportunity to visit or benefit from Singer’s hypothetical museum. In any case, art is hardly the only possible delivery mechanism for meaning. <a href="http://blog.givewell.org/2013/08/20/excited-altruism/">In the words of one effective altruist</a>,</p>
<blockquote><p>Trying to maximize the good I accomplish with both my hours and my dollars is an intellectually engaging challenge. It makes my life feel more meaningful and more important. It’s a way of trying to have an impact and significance beyond my daily experience. In other words, it meets the sort of non-material needs that many people have.</p></blockquote>
<p>Whether the EA movement sputters or gathers steam, taking the time to engage with its principles, even critically, is a healthy exercise. The bottom line is that EAs may actually be onto something when they argue it’s possible to make a bigger dent in one sector than another. Rather than insisting otherwise or dodging the argument altogether, we could heed the call to examine how altruism really manifests in our work, particularly when examined through the lens of <i>what benefits the people we engage, </i>rather than what benefits our organizations or our donors. Might we, too, have objective reasons for thinking we may be able to do more “good” in one program, or with one population, than in another? Do we, too, have a moral obligation to maximize that good? How would that change how we operate and who we serve? Do we <i>want </i>to change how we operate?</p>
<p>If the effective altruism debate makes anything clear, it’s that to be able to make art, not to mention argue about it, is to be fortunate. Taking a hard look at our assumptions about what draws us to this work and keeps us in it may not be easy, but if we squirm a little, so be it. In the grand scheme of things, a little squirming is a luxury too.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2013/12/uncomfortable-thoughts-are-we-missing-the-point-of-effective-altruism/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>No Strings Attached</title>
		<link>https://createquity.com/2013/11/no-strings-attached/</link>
		<comments>https://createquity.com/2013/11/no-strings-attached/#comments</comments>
		<pubDate>Thu, 14 Nov 2013 13:14:38 +0000</pubDate>
		<dc:creator><![CDATA[Lindsey Cosgrove]]></dc:creator>
				<category><![CDATA[Economy]]></category>
		<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[cash transfers]]></category>
		<category><![CDATA[Createquity Fellowship]]></category>
		<category><![CDATA[effective altruism]]></category>
		<category><![CDATA[general operating support]]></category>
		<category><![CDATA[GiveDirectly]]></category>
		<category><![CDATA[GiveWell]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[nonprofit sector]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=5785</guid>
		<description><![CDATA[A few years ago four grad students from Harvard and M.I.T. decided they wanted to use their brains and dollars to improve the lives of some of the poorest people in the world. They researched different strategies of philanthropy, looked at the data available, and based on the evidence they chose a novel approach. <a href="https://createquity.com/2013/11/no-strings-attached/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<div id="attachment_5838" style="width: 442px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/46166795@N08/4804951946/in/photolist-8jACP5-813SdV-d8xxeb-azj4cK-cRb72s-5HJAxV-cvBTYG-6CFdyh-6CFdFG-yHLnS-LsMFe-4CDoFn-52Ra1V-5VWKk8-52Pera-cWGNfN-52Netc-769nyP-7K9tX8-6GVojr-cvBTnU-a1EGV6-6KtJJt-4nEfkp-8yCdtv-3x9zaG-6TqSw8-7DXqyE-6xN4Dx-9wk4SH-fV86pg-fV7E8X-fV7Kph-b4JhqR-6D2n8e" target="_blank"><img decoding="async" aria-describedby="caption-attachment-5838" class=" wp-image-5838    " alt="Photo by Claudia Daggett." src="https://createquity.com/wp-content/uploads/2013/11/Photo-by-Claudia-Daggett1.gif" width="432" height="371" /></a><p id="caption-attachment-5838" class="wp-caption-text">Kenyan shilling. Photo by Claudia Daggett.</p></div>
<p style="text-align: left;" align="center">A few years ago four grad students from Harvard and M.I.T. decided they wanted to use their brains and dollars to improve the lives of some of the poorest people in the world. They researched different strategies of philanthropy, looked at the data available, and based on the evidence they chose a novel approach. No microlending, school-building, or vaccination campaigns for them: they would just <a href="http://www.theatlantic.com/business/archive/2012/12/can-4-economists-build-the-most-economically-efficient-charity-ever/266510/">give away cash</a>, no strings attached. They called their new charity, simply enough, <a href="http://www.givedirectly.org/">GiveDirectly</a>.</p>
<p style="text-align: left;"><b>The Concept</b></p>
<p>This is how it works: money is transferred from the organization to pre-identified families in Kenya via cell phones. GiveDirectly’s selection of recipients is based solely on need as signaled by mud or thatch roofs, as opposed to more durable materials. The standard amount is <a href="http://www.givedirectly.org/index.php">$1,000 over one to two years</a>, about as much as a poor Kenyan family might spend in a single year. There are no restrictions on what a family can buy with money from GiveDirectly. Unlike with more traditional philanthropic efforts, there are no mandatory health check-ups or vaccinations, no obligatory training programs, and no mandates of any kind. The cash is completely unfettered. It is a “UCT”: unconditional cash transfer.</p>
<p>One common use of GiveDirectly cash transfers is the purchase of a metal roof. Mud and thatch don’t hold up well to weather elements and must be repaired or replaced often. There are major savings to be had with the purchase of a metal roof, which typically lasts at least a decade.</p>
<p>That’s not the only way in which recipients spend the money, however. If you own a motorbike in Kenya that can withstand the terrain, won’t break down on long journeys, and can carry a passenger, you can be a taxi driver. With an influx of cash, you can literally buy yourself a livelihood.</p>
<p>These sorts of purchases improve quality of life and increase earning capacity, and some even have a ripple effect beyond a single family. A cow, for example, can provide milk for an entire village, an income stream for the owner, and can create more cows to continue to multiply the benefits.</p>
<p><b>The Logistics</b></p>
<p>Since a normal transfer from bank account to bank account isn’t an option, and transportation and distribution of thousands of dollars of physical cash would require security and increase liability for the charity, the transactions are carried out via cell phone. In Kenya the “mobile-money system” is called M-Pesa, and it’s <a href="http://www.economist.com/blogs/economist-explains/2013/05/economist-explains-18">one of the most successful of its kind</a>; transfers are discreet, simple, and perfect for this atypical exchange.</p>
<div id="attachment_5830" style="width: 532px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/22319323@N00/6975541684/in/photolist-bCptw5-9yypdS-9yvocV-9yvoQ2-9WyJYw-7Ytc7D-95BG7d-bnYk3T-dUzm55" target="_blank"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-5830" class=" wp-image-5830    " alt="One M-Pesa location in Kenya. Photo by Fiona Bradley. " src="https://createquity.com/wp-content/uploads/2013/11/mpesa-photo-by-Fiona-Bradley1.gif" width="522" height="291" /></a><p id="caption-attachment-5830" class="wp-caption-text">One M-Pesa location in Kenya. Photo by Fiona Bradley.</p></div>
<p>Using this technology, GiveDirectly can keep track of hundreds of recipient families with a single spreadsheet and send them money each month safely and securely. The recipient gets a text message indicating that the money has arrived and goes to his or her local M-Pesa franchise to pick it up (the one described in the radio show “<a href="http://www.thisamericanlife.org/radio-archives/episode/503/i-was-just-trying-to-help">This American Life</a>”’s GiveDirectly coverage is basically a VW bus turned bank with a ledger, a box of cash, and one staff person). It might sound dodgy to the Western world, but the M-Pesa system is dependable and ubiquitous in Kenya and saves people time and money previously spent traveling to traditional banks or delivering money to family in remote and inaccessible areas.</p>
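<p>As a purely hypothetical illustration of the kind of single-spreadsheet ledger the paragraph describes, one row per recipient might look like this. Every field name and value below is invented for illustration and does not reflect GiveDirectly&#8217;s actual systems.</p>

```python
# Hypothetical transfer-ledger sketch; all field names and values are
# invented for illustration and do not reflect GiveDirectly's real records.
from dataclasses import dataclass

@dataclass
class TransferRow:
    household_id: str       # anonymized recipient identifier
    mpesa_number: str       # M-Pesa-registered phone number
    monthly_usd: float      # amount sent each month
    months_remaining: int   # transfers left on the schedule

    def disburse(self) -> float:
        """Record one monthly transfer and return the amount sent."""
        if self.months_remaining <= 0:
            raise ValueError("transfer schedule already complete")
        self.months_remaining -= 1
        return self.monthly_usd

# e.g. $1,000 total spread over 12 monthly transfers
row = TransferRow("HH-0001", "+254700000000", round(1_000 / 12, 2), 12)
sent = row.disburse()
print(f"sent ${sent:.2f}; {row.months_remaining} transfers remaining")
```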
<p><b>The Evidence </b></p>
<p>As Jacob Goldstein for the <a href="http://www.nytimes.com/2013/08/18/magazine/is-it-nuts-to-give-to-the-poor-without-strings-attached.html?pagewanted=1&amp;_r=1">New York Times</a> writes, “At its most basic level… GiveDirectly’s work is an attempt to test one of the simplest ideas in economics — that people know what they need, and if they have money, they can buy it.” As radical as the approach may seem, it is grounded in a strong evidence base. According to Holden Karnofsky, co-founder of GiveWell, a charity evaluator that recommends GiveDirectly, “cash transfers…happen to be the <a href="http://blog.givewell.org/2012/12/26/the-case-for-cash-2/">most extensively studied non-health intervention</a> we know of.” Indeed, traditional charities <a href="http://blog.givewell.org/2012/12/26/the-case-for-cash-2/">often call the very act of transferring funds into the hands of low-income people success</a>. In GiveDirectly’s model, by contrast, the impact measures are more nuanced. Money changing hands is the intervention, not the desired outcome.</p>
<p>GiveDirectly is committed to investigating and recording its own impact. The organization recently completed a <a href="http://www.givedirectly.org/index.php">randomized controlled trial</a> in Kenya, which reveals that “recipients are not just spending their transfers, providing a <a href="http://economix.blogs.nytimes.com/2013/10/25/how-no-strings-aid-affects-the-poor/">one-time boost to their consumption</a> without affecting their overall well-being.” The trial shows that food consumption increased 20 percent for transfer recipients and the value of recipients’ livestock increased by 50 percent. The study even showed that recipients’ stress levels improved – <a href="http://economix.blogs.nytimes.com/2013/10/25/how-no-strings-aid-affects-the-poor/">their actual stress hormone levels decreased</a>. The full report and a summary are available <a href="http://www.givedirectly.org/evidence.php">here</a>.</p>
<p><b>The Implications</b></p>
<p>Pondering the broader lessons of UTCs and GiveDirectly, I’m reminded of all of the nonprofit organizations out there for which an influx of funding could really change their organizational “standard of living.” What GiveDirectly is providing goes by another name in the nonprofit sector: general operating support.</p>
<p>General operating support is the holy grail of nonprofit fundraising, defined by the <a href="http://foundationcenter.org/gainknowledge/grantsclass/ntee_gcs.html">Foundation Center</a> as “grants for the day-to-day operating costs of an existing program or organization; also called unrestricted grants.” There may be reporting requirements Kenyans don’t have to adhere to, and an application process with more vetting than GiveDirectly’s system of identifying families in need, but the concepts aren’t too far off. The similarities between general operating support to organizations and cash transfers to families, however, might not be as obvious to some in the nonprofit sector.</p>
<blockquote><p>&#8220;We had conversations with people [in the non-profit sector] who said there was a lot of <a href="http://www.theatlantic.com/business/archive/2012/12/can-4-economists-build-the-most-economically-efficient-charity-ever/266510/">internal resistance to unconditional transfers</a>,&#8221; Niehaus [one of GiveDirectly’s four founders] told [reporter Dana Goldstein]. &#8220;If this works, what are we all here for? Why do we have jobs? There&#8217;s an industry that exists that tries to make decisions for poor people and determine what&#8217;s best for them.&#8221;</p></blockquote>
<p>Shouldn’t we want the same kind of aid for the poor that we in the nonprofit sector would want for ourselves?</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2013/11/no-strings-attached/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>The Cultural Data Project and Its Impact on Arts Organizations</title>
		<link>https://createquity.com/2013/03/the-cultural-data-project-and-its-impact-on-arts-organizations/</link>
		<comments>https://createquity.com/2013/03/the-cultural-data-project-and-its-impact-on-arts-organizations/#comments</comments>
		<pubDate>Tue, 05 Mar 2013 13:10:12 +0000</pubDate>
		<dc:creator><![CDATA[Talia Gibas and Amanda Keil]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Policy & Advocacy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Cultural Data Project]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[Nonprofit Finance Fund]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4636</guid>
		<description><![CDATA[For all of the predictions flying back and forth about what 2013 holds for the arts and culture sector in the United States, one of the few things we can say with near-certainty is that 2013 will be a year of major transition for the Cultural Data Project (CDP). Our sector’s largest-scale effort to quantify and<a href="https://createquity.com/2013/03/the-cultural-data-project-and-its-impact-on-arts-organizations/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<div id="attachment_4637" style="width: 441px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/tensafefrogs/3649985674/sizes/m/"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-4637" class="wp-image-4637 size-full" src="https://createquity.com/wp-content/uploads/2013/03/3649985674_dd0f762b241.jpg" alt="One intersection of Data and art. Photo by Geoff Stearns." width="431" height="500" srcset="https://createquity.com/wp-content/uploads/2013/03/3649985674_dd0f762b241.jpg 431w, https://createquity.com/wp-content/uploads/2013/03/3649985674_dd0f762b241-258x300.jpg 258w" sizes="auto, (max-width: 431px) 100vw, 431px" /></a><p id="caption-attachment-4637" class="wp-caption-text">One intersection of Data and art. Photo by Geoff Stearns.</p></div>
<p>For all of the predictions flying back and forth about <a href="http://archive.constantcontact.com/fs128/1102382269951/archive/1111919139937.html">what 2013 holds for the arts and culture sector in the United States</a>, one of the few things we can say with near-certainty is that 2013 will be a year of major transition for the <a href="http://www.culturaldata.org/">Cultural Data Project</a> (CDP). Our sector’s largest-scale effort to quantify and streamline the “value” of arts and culture across different regions, the CDP is breaking off from its original home at the <a href="http://www.pewtrusts.org/">Pew Charitable Trusts</a> to become a separate nonprofit. Until now, the CDP has been governed by a local Pennsylvania-based board, but it recently launched a <a href="http://www.culturaldata.org/about/boardofdirectors/">national board</a> and <a href="http://www.culturaldata.org/wp-content/uploads/press-release_beth-tuttle_2013-1-30.pdf">announced Beth Tuttle as its new CEO</a>, who will lead a strategic planning process to further evolve its scope and impact. These shifts, according to the CDP, will “put [it] in an even stronger position to serve the arts and culture community” and its “increasingly large and diverse number of constituents.”</p>
<p>As arts organizations are increasingly pressed to demonstrate tangible benefits, this turning point in the CDP’s history provides an opportunity to look back on its trajectory and examine ways to measure the contributions of arts and culture to society.</p>
<p>For those unfamiliar with the CDP and how it came to be, here is a little refresher:</p>
<p><strong>Background of the CDP</strong></p>
<p>The CDP launched in Pennsylvania in 2004 thanks to a collaboration between the Pew Charitable Trusts and a number of local funders. It is an online tool that aims to help arts and cultural organizations “improve their financial management and services to communities” by streamlining, storing and aggregating data on their financial and programmatic activities each year. Using the CDP platform, organizations create an annual Data Profile based on their financial audit and quantitative programmatic data, such as the number of exhibitions or workshops held.</p>
<p>In grantseeking, it functions a little bit like <a href="https://www.commonapp.org/CommonApp/default.aspx">the Common Application</a> for undergraduate college admissions, which some funders have emulated through <a href="http://www.philanthropynewyork.org/s_nyrag/sec.asp?CID=5494&amp;DID=11895">standardized application templates</a>; once organizations create Data Profiles they can then submit customized reports to multiple funders. Arts organizations also have access to a variety of tools that illustrate trends in their performance from year to year, and can compare their data against other organizations in the region or across all states participating in the CDP.</p>
<p><a href="http://www.culturaldata.org/about/reports/">The tools</a> include:</p>
<ul>
<li>A Program Revenue and Marketing Expense Report, which explores the relationship between marketing expenses and attendance figures</li>
<li>A Personnel Report, which outlines costs associated with staffing, salaries and benefits</li>
<li>A Contributed Revenue and Fundraising Expense Report, which examines how changes in fundraising expenses affect contributed revenue</li>
<li>The <a href="http://www.culturaldata.org/2012/05/07/new-financial-health-analysis-for-arts-and-cultural-organizations-by-cdp-and-nff-available-may-22nd/">Financial Health Analysis</a>, developed in partnership with the Nonprofit Finance Fund, which serves as a fiscal health “check up” and provides an overview of financial strengths, weaknesses and business dynamics</li>
</ul>
<p>Users can tailor comparison reports according to detailed specifications. For example, if you’re a mid-sized theater company, you can see how your programming and marketing expenses compare to those of your peers. You can also see how they compare against, say, all arts organizations that were founded within a certain time frame and target a specific constituent group. CDP Help Desk staff assist with running those reports and with cleaning and checking the data that goes into creating a Data Profile, which consists of <a href="http://www.culturaldata.org/wp-content/themes/cdp/pdf/CDP-BlankProfile.pdf">nearly 300 questions</a> and requires 15 to 30 hours to complete.</p>
<p>Why go through such a complex process? For one thing, funders get a standardized profile with grant applications that allows them to easily compare organizations. Accordingly, most funders who support the CDP require that nonprofits submit their Data Profiles and a customized report with their grant applications. Furthermore, many funders are interested in research, and the CDP provides a massive data bank. The more funders require their constituents to use the CDP, the bigger the ultimate payoff for researchers: a regional (and, perhaps one day, national) aggregate of information about arts and cultural organizations, updated annually.</p>
<p>As for the cultural organizations that have to complete the Data Profile, those applying to multiple grantmakers requiring CDP reports receive the benefit of being able to submit detailed financial information in one format. If they participate over several years and take advantage of the reports to track and compare their progress, they also gain a better picture of their own strengths. The drawback comes for organizations with limited capacity to devote to completing the Data Profile, or for those seeking to evaluate more qualitative elements of their work. But therein lie the possibilities.</p>
<p><strong>The CDP’s Impact to Date: The Research Perspective</strong></p>
<p><a href="http://www.culturaldata.org/home/aboutthismap/">Thirteen states</a> currently take part in the CDP, with sixteen more expressing interest. (A list of participating states with links to their CDP Web sites is available <a href="http://www.culturaldata.org/about/national-expansion/">here</a>.) In the near-decade since the CDP launched, how has it benefited the myriad cultural organizations that input data and the researchers who analyze it?</p>
<p><a href="http://www.culturaldata.org/research/">Most researchers</a> to date have used CDP data to try to quantify the broad “impact” of the arts and culture sector in financial and programmatic terms, totaling up the number of people employed by arts and culture organizations in a given area, the number of public events, the number of attendees, etc.  A snippet from Cuyahoga County’s <a href="http://www.culturaldata.org/2012/05/01/cuyahoga-arts-culture-releases-new-report-shows-30-percent-growth-in-regions-arts-sector/">Strengthening Communities 2011</a> report provides one such example:</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-4641 size-full" src="https://createquity.com/wp-content/uploads/2013/03/Screen-Shot-2013-02-28-at-10.14.04-PM1.png" alt="Excerpt from Strengthening Communities" width="754" height="527" srcset="https://createquity.com/wp-content/uploads/2013/03/Screen-Shot-2013-02-28-at-10.14.04-PM1.png 754w, https://createquity.com/wp-content/uploads/2013/03/Screen-Shot-2013-02-28-at-10.14.04-PM1-300x209.png 300w" sizes="auto, (max-width: 754px) 100vw, 754px" /></p>
<p>The ability to quantify the scope and variety of the nonprofit arts and culture communities has proven invaluable to several advocacy campaigns. <a href="http://www.philaculture.org/action/hottopics/5980/cultural-alliance-weighs-state-sales-tax">The Greater Philadelphia Cultural Alliance used CDP data</a> to lead constituents across the state to successfully defeat a proposed tax on arts and culture activities. ArtServe Michigan’s <a href="http://creativestatemi.artservemichigan.org/">Creative State Michigan Report</a>, composed mainly of CDP data, <a href="http://www.artservemichigan.org/20120209135/news/press-releases/gov-snyders-proposed-funding-for-arts-culture-represents-important-investment-in-michigans-future-prosperity/">prompted the state’s Republican governor to propose a threefold increase</a> in state arts funding.</p>
<p>Regardless of <a href="https://createquity.com/2009/09/arts-economic-prosperity-cliffs-notes-version.html">whether studies like these demonstrate that the arts generate economic growth</a>, research resulting from the CDP has undoubtedly helped make the case for investment in the arts and culture and tell a more concrete story about people served. These efforts have been somewhat limited, however, by the “one size fits all” nature of the CDP’s construction. After all, attributes like financial health and the quantity of public events would be more accurately described as outputs rather than outcome or impact measures. The fact that CDP profiles are optional to fill out for organizations whose funders don’t require it also <a href="http://www.culturaldata.org/wp-content/uploads/ca_arts_ecology_tapp_2011sept20.pdf">creates challenges</a> for researchers who wish to generalize from CDP data to all nonprofit arts and culture organizations, since the CDP constitutes a nonrandom sample of such organizations. Recently, funders in <a href="http://www.cac.ca.gov/newsroom/atthecacdetail.php?id=356">California</a> and <a href="http://www.nyc.gov/html/dcla/downloads/pdf/Arts%20_Research_Fund_Announcement.pdf">New York</a> have begun piloting small grant programs to support research projects using the CDP in their states; time will tell if these opportunities lead to a more diverse range of inquiries using the data.<br />
</p>
<p><strong>The CDP&#8217;s Impact to Date: Arts Organizations&#8217; Perspective</strong></p>
<p>The CDP has been a boon to research and advocacy efforts, but cultural organizations themselves don’t always hear about that work, or take full advantage of the CDP’s resources. In 2012, the CDP conducted a survey of over 1,800 arts organizations charged with filling out a Data Profile every year. Arin Sullivan, the CDP’s Senior Associate for State-Based Activities, said the survey found that 68 percent of respondents had never read a report that includes CDP data, like Cuyahoga County’s. This implies that researchers, and the CDP itself, need to close the feedback loop between research and the constituents being studied. In addition, according to Sullivan, the survey revealed that more than 40 percent of participating organizations have never run an annual, trend, or comparison report.<br />
</p>
<p>There are a number of possible reasons for this. Organizations simply may not be aware of the CDP reporting features. They may have limited staff capacity to run and analyze reports, particularly after devoting significant time to completing their Data Profile. Or they may be aware of the reporting features but not consider them worth using.</p>
<p>No matter what the reasons, these results suggest a range of perceptions about the CDP. To some organizations, it is simply a profile to complete in order to get a grant. To others, it is a resource that can help better inform their practice. The same survey that found nearly half of organizations don’t use CDP reporting tools also found that 45% of participants understood their own finances better as a result of completing the Profile. Of those respondents that did use CDP reports, 40% said it resulted in better transparency, 45% said they had a better sense of their progress and goals, and 56% said they had a better sense of their organization over time.  These relatively low percentages suggest that even organizations taking full advantage of CDP reports do not always find them of substantial benefit.<br />
</p>
<p>Yet for some organizations, the CDP has been quite useful indeed. <a href="http://www.danceworkschicago.org/">DanceWorks Chicago</a>, for example, changed its entire accounting system to align with the format of the CDP. According to CEO Andreas Böttcher, many small organizations have little expertise in accounting, and until the advent of the CDP had no clear template to follow when setting up their books. Using the CDP was “onerous at first,” he said, “but once you have it, you can build a business plan based on what you have to report.”
</p>
<p>DanceWorks has also made use of comparison reports. Böttcher said they’ve shown funders, for example, that not only is DanceWorks unique in its programming (which provides a laboratory for artistic growth to dancers, choreographers, and directors), but it is one of only a few organizations of its size to offer health insurance to dancers. Comparison reports also affirmed other things the company was doing well, despite its relative youth: its 80 individual donors reflect a larger funding pool than those of comparable, older organizations, and its web-based donations are 300% higher.
</p>
<p>Other organizations have used the CDP to better track programmatic information, such as their most popular services or types of events. One such organization is the Rhode Island Historical Society, whose Executive Director, Morgan Grefe, used the CDP as an opportunity to make a greater commitment to program evaluation. According to Grefe, having to complete the Data Profile made program evaluation “the thing you can’t ignore instead of the thing you always put off.” She credits the CDP process with the Society’s increased attentiveness to more qualitative program data, which has had an effect on the organization’s work.
</p>
<p>For example, staff members started tracking how many visitors to different historic sites used those sites as a “visitor’s center,” asking for brochures and other tourist information but not participating in a tour of the building. Tracking such data enabled the Society to illustrate to public and private funders that the sites hold value for the tourist industry as well as for visitors concerned primarily with historic preservation. It also allows Grefe to make programmatic decisions, such as recruiting volunteers to fulfill visitor center functions or investing in more brochures.
</p>
<p>An increasingly data-oriented attitude also led the Society to launch the Rhode Island History Online Directory Initiative (RHODI). The <a href="http://rhodiproject.wordpress.com/">RHODI Project</a> will be “a comprehensive and detailed survey of Rhode Island’s history and heritage sector, [providing] not only trustworthy data on which to base future grant-funded proposals for such activities as collections cataloging, capacity building advice, preservation projects, educational programming, a virtual museum, but also the much needed impetus for synergies and collaborations.”
</p>
<p><strong>Beyond the Numbers: Looking Ahead as the CDP Expands</strong></p>
<p>As these examples indicate, the CDP has shown potential in establishing and tracking organizational success measures that can strengthen business operations, support arts advocacy, and guide grantmaking. In order for that success to solidify, however, the CDP must gain buy-in from a broader group of stakeholders and address the fact that its current ability to gauge the arts’ “impact” is limited.<br />
</p>
<p>Of course, the CDP can’t be expected to track and measure everything related to arts and culture. Perhaps in recognition of this fact, and in an effort to integrate CDP data with other available resources, Southern Methodist University <a href="http://www.smu.edu/News/2013/arts-research-center-12feb2013">recently announced a partnership</a> with the CDP to launch the National Center for Arts Research (NCAR). Among other things, NCAR plans to launch an interactive “dashboard” where “arts leaders will be able to enter information about their organizations and see how they compare to the highest performance standards in areas such as community engagement, earned and contributed revenue and balance sheet health.” Assuming the effort manages to avoid redundancies with existing CDP infrastructure, it may fulfill its vision to become “a catalyst for the transformation and sustainability of the national arts and cultural community.”
</p>
<p>At this point in the CDP’s history, it’s worth asking how it can further engage arts organizations, and what role it can play in evaluating more qualitative trends in cultural activities, such as audience loyalty and the evolution of programming. Since the organizations participating in the CDP range from zoos to art galleries, attempting to evaluate these aspects in any standardized way may be an exercise in insanity. What’s the dance education equivalent of a blockbuster museum exhibition? Nevertheless, there may be opportunities for the CDP to help organizations draw connections between CDP data and other reports, and to provide them with a broader toolkit of resources.
</p>
<p>For example, for more than 20 years the League of American Orchestras has compiled a <a href="http://www.americanorchestras.org/knowledge-research-innovation/knowledge-center/surveys-reports-and-data/orchestra-repertoire-reports.html">report of repertoire played by its member orchestras</a>, including a list of the top 10 most frequently performed works, composers, and soloists. Though these reports are sadly published only as dense PDFs, one could use them to track the repertoire of the top orchestras and determine whether programming is, for example, becoming more conservative over time. Those same orchestras could make careful use of their own CDP data to make connections between programming choices and financial health.
</p>
<p>The community and economic development sector, meanwhile, has attempted to build evaluative tools into a shared measurement system with a platform called <a href="http://www.successmeasures.org/">Success Measures</a>, which is smaller in scope but has certain similarities to the CDP. Success Measures provides its members, most of whom are organizations with little experience or capacity for program evaluation, with a set of evaluative tools (surveys, interview protocols, observational checklists, and so forth). They receive training on how to use the tools and then, using an online system similar to the CDP’s, upload, clean and analyze their data.
</p>
<p>Participation in Success Measures is optional. By contrast, in each state where the CDP operates, many funders require it as part of the application process, effectively enforcing participation. Given the CDP’s reach and the strong level of technical support it already provides, it is well poised to develop and disseminate additional tools. Suppose an organization hoped to understand its audience retention rates and had the option of accessing standardized survey tools as a CDP participant. The CDP would provide the organization with technical assistance in using the tools, in exchange for the survey results being added to the aggregate database. Such a system could not only strengthen the value of participating in the CDP, but also take a step toward a deeper understanding of “impact.” New tools &#8212; even optional ones &#8212; can motivate organizations to take a closer look at their activities, perhaps discovering how cultural choices affect their bottom line over time, or whether their audience development efforts are truly paying off.
</p>
<p>No matter how (and if) it chooses to expand, a few things about the CDP are clear:</p>
<ul>
<li>It has propelled an ongoing conversation about the economic role of arts and culture</li>
<li>It may yet build the capacity of the field as a whole if it more directly engages the arts and culture organizations it aims to serve</li>
<li>To do so, it needs to clarify and draw greater attention to the existing benefits of using the system, and stay alert to organizations’ needs moving forward</li>
</ul>
<p>With both for-profit and nonprofit sectors gravitating toward Big Data (and <a href="http://tech.fortune.cnn.com/2012/01/06/data-scientist-jobs/">seeking more and more people</a> to help them make sense of that data), the CDP can take a lead in ensuring that aggregate information is not only useful to researchers studying the arts and culture sector, but also to the organizations &#8212; large and small, vibrant and struggling &#8212; that are its engine.</p>
<p><em>(<a href="https://createquity.com/author/taliagibas">Talia Gibas</a> is Manager, Arts for All at the Los Angeles County Arts Commission and a past Createquity Writing Fellow. <a href="http://amandakeil.com/">Amanda Keil</a> is a mezzo-soprano and writer based in New York City.)</em></p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2013/03/the-cultural-data-project-and-its-impact-on-arts-organizations/feed/</wfw:commentRss>
		<slash:comments>12</slash:comments>
		</item>
		<item>
		<title>Solving the Underpants Gnomes Problem: Towards an Evidence-Based Arts Policy</title>
		<link>https://createquity.com/2013/02/solving-the-underpants-gnomes-problem-towards-an-evidence-based-arts-policy/</link>
		<comments>https://createquity.com/2013/02/solving-the-underpants-gnomes-problem-towards-an-evidence-based-arts-policy/#comments</comments>
		<pubDate>Mon, 25 Feb 2013 14:43:20 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Policy & Advocacy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[ArtPlace]]></category>
		<category><![CDATA[Arts Ripple Effect]]></category>
		<category><![CDATA[ArtsWave]]></category>
		<category><![CDATA[Chicago]]></category>
		<category><![CDATA[Creative Class]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[grantmaking]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[logic models]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[Richard Florida]]></category>
		<category><![CDATA[risk]]></category>
		<category><![CDATA[strategy]]></category>
		<category><![CDATA[supply and demand]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4577</guid>
		<description><![CDATA[Arts research is broken. Here's how to fix it.]]></description>
				<content:encoded><![CDATA[<p>That&#8217;s the <a href="http://www.norc.org/NewsEventsPublications/Events/Pages/solving-the-underpants-gnomes-problem.aspx">title of a talk I presented</a> via the University of Chicago&#8217;s Cultural Policy Center on November 14, 2012. It&#8217;s long, but I think it&#8217;s one of the more significant things I&#8217;ve done recently and hope you&#8217;ll check it out if you have some time. The actual lecture portion of the talk occupies the first 52 minutes of the video, and it starts off with a recap/synthesis of material that will be familiar to regular readers of this blog (specifically, <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html">Creative Placemaking Has an Outcomes Problem</a> and <a href="https://createquity.com/2012/06/in-defense-of-logic-models.html">In Defense of Logic Models</a>). Just shy of the 27-minute mark, though, I pivot and start laying out a diagnosis of how our arts research infrastructure is failing us, a vision for how we could fix it, and why it all matters &#8211; a lot.</p>
<div style="text-align: center;"><iframe loading="lazy" src="http://www.youtube.com/embed/kQD1zwdOv_0?rel=0" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></div>
<p>&nbsp;</p>
<p>Since I didn&#8217;t write out the speech in advance, I don&#8217;t have a transcript for it. However, below is a reconstruction of the new material from my notes, so you can get a taste for it if you don&#8217;t have time to watch the whole thing right now. (You&#8217;ll notice I make a number of generalizations in the speech about the ways in which arts practitioners interact with research. These are based on observation and personal experience, and are best understood as my working hypotheses.)</p>
<p>*</p>
<p>[starting at 26:55]</p>
<p>Why is this integration between data and strategy important? Because research <strong>is only valuable insofar as it influences decisions</strong>. This is why logic models are awesome – they are a visual depiction of strategy. And there is no such thing as strategy without cause and effect. Think about that for a second. Our lives can be understood as a set of circumstances and decisions. We make decisions to try to improve our circumstances, and sometimes the circumstances of those around us. Every decision you make is based on a prediction, whether explicitly articulated or not, about the results of that decision. Every decision, therefore, carries with it some degree of <i>uncertainty</i>. This uncertainty can be expressed another way: as an assumption about the way the world works and the context in which your decision is being made. These assumptions are distinguished from known facts.</p>
<p>If you can reduce the uncertainty associated with your assumptions, the chances that you will make the right decision will increase. So, how do you reduce that uncertainty? Through research, of course! Studying what has happened in the past can inform what is likely to happen in the future. Studying what has happened in other contexts can inform what is likely to happen in your context. And studying what is happening <i>now</i> can tell you whether your assumptions seem spot on or off by a mile. Alas, research and practice in our field are frequently disconnected in problematic ways. Six issues are preventing us from reaching our potential.</p>
<p><strong>Issue #1: Capacity</strong></p>
<p>Supply and demand apply as much to research <a href="https://createquity.com/2011/03/supply-is-not-going-to-decrease-so-its-time-to-think-about-curating.html">as they do to artists</a>. There are far more studies out there than a normal arts professional can possibly fully process. I wish I could tell you how many research reports are published in the arts each year, but nobody knows! To establish a lower bound, I went back over last year’s [2011] “<a href="https://createquity.com/tag/around-the-horn">around the horn</a>” posts, which report new research studies that I hear about. I counted at least 41 relevant arts-research-related publications – a tiny fraction, I’m sure, of total output. To make matters worse, research reports are long, and arts professionals are busy. For the <a href="https://createquity.com/about/createquity-writing-fellowship">Createquity Writing Fellowship program</a>, participants are required to analyze a work of arts research for the <a href="https://createquity.com/arts-policy-library">Createquity Arts Policy Library</a>. I collect data on how long it takes to do this, and consistently, it requires 30-80 hours to research, analyze and write just one piece! Multiply this by the number of new studies each year, and you can start to see the magnitude of the problem.</p>
<p><strong>Issue #2: Dissemination</strong></p>
<p>Which research reports is an arts practitioner likely to even know about? Certainly not all of them, because there is almost no meaningful connection between the academic research infrastructure and the professional arts ecosystem. Lots of research relevant to the arts is published in academic journals each year, but unless the researcher was commissioned to do the work by a foundation, we never hear about it. Academic papers are typically behind a paywall, and most arts organizations don’t have journal subscriptions. To give an example, after I <a href="https://createquity.com/2009/04/deconstructing-richard-florida.html">wrote about Richard Florida’s <em>Rise of the Creative Class</em></a>, Florida <a href="https://createquity.com/2009/05/richard-florida-responds.html">pointed me</a> to a <a href="https://createquity.com/2009/05/reconstructing-florida.html">study in two parts by two Dutch researchers</a>. It’s one of the best resources I’ve come across for creative class theory, but I’ve never heard anyone even mention either study other than him and me.</p>
<p><strong>Issue #3: Interpretation</strong></p>
<p>Research reports inevitably reflect the researcher’s voice and agenda. This is especially true of executive summaries and press releases, which are often all anyone &#8220;reads&#8221; of a research report. Probably the most common agenda, of course, is to convey that the researcher knows what he/she is talking about. Another common agenda is to ensure repeat business from, or at least a continuing relationship with, the client who commissioned the study. The reality, however, is that research varies widely in quality. There&#8217;s no certification process; anyone can call themselves a researcher. But even highly respected professionals can make mistakes, pursue questionable methods, or overlook obvious holes in their logic. And, in my experience, the reality of any given research effort is usually nuanced – some aspects of it are much more valuable than others. Unfortunately, many arts professionals lack the expertise to properly evaluate research reports, never having had even basic statistics training.</p>
<p><strong>Issue #4: Objectivity</strong></p>
<p>Research is about uncovering the truth, but sometimes people don’t want to know the truth. Advocacy goals often precede research. How many times have you heard somebody say a version of the following: “We need research to back this up”? That statement suggests a kind of research study that we see all too often: one that is conducted to affirm decisions that have already been made. By contrast, when we create a logic model, we start with the end first: we identify what we are trying to achieve and only then determine the activities necessary to achieve it.</p>
<p>Here are a bunch of bad, but common reasons to do a research project:</p>
<ul>
<li>To prove your own value.</li>
<li>To increase your organization’s prestige.</li>
<li>To advance an ideological agenda.</li>
<li>To provide political cover for a decision.</li>
</ul>
<p>There is only <em>one</em> good reason to do research, and that is to try to find out something you didn’t know before.</p>
<p><strong>Issue #5: Fragmentation</strong></p>
<p>The worst part of the problem I just described is that it drives what research gets done – and what doesn’t get done. There is no common research agenda adopted by the entire field, which is a shame, because collective knowledge is pretty much the definition of a public good: if I increase my own knowledge, it’s very easy for me to increase your knowledge too. The practical consequences of this fragmentation are severe. It results in a concentration of research using readily available data sources (ignoring the fact that the creation of new data sources may be more valuable). It results in a concentration of research in geographies and communities that can afford it, because people don’t often pay for research that’s not about them. And it results in a concentration of research serving narrow interests: discipline-specific, organization-specific, methodology-specific. My biggest pet peeve is that research is <em>almost never intentionally replicated</em> – everybody’s reinventing the wheel, studying the same things over and over again in slightly different ways. A great example of a research study crying out for replication is the <a href="http://www.theartswave.org/sites/default/files/pdfs/The%20Arts%20Ripple%20Report,%20January%202010.pdf">Arts Ripple Effect report</a>, which I talked about earlier. The results of that study are now guiding the distribution of millions of dollars in annual arts funding. Are those results universal, or unique to the Greater Cincinnati region? We have no way to know.</p>
<p><strong>Issue #6: Allocating resources</strong></p>
<p>Everyone knows there&#8217;s been a trend in recent years towards more and more data collection at the level of the organization or artist. Organizations, especially small ones, complain all the time about being expected to do audience surveys, submit onerous paperwork, and so forth. And you know what, I agree with them! You might be surprised to hear me say that, but when you&#8217;re talking about organizations with small budgets and no expertise to do this kind of work, and the funder requesting the information isn&#8217;t providing any assistance to get it, my advice to that funder is: just take a risk! If a small grant goes bad, so what? You’re out a few thousand dollars. The sun will rise tomorrow.</p>
<p>As an example of what I&#8217;m talking about, I <a href="https://createquity.com/2012/10/live-from-cleveland-arts-philanthropy-in-action.html">participated in a grant panel recently</a>. I enjoyed the experience, and am glad I did it, but there&#8217;s one aspect of the experience that is relevant here. There were seven panelists, and we were all from out of town. Each of us spent, I&#8217;d say, roughly 40 hours reviewing applications in advance of the panel itself. Then we all got together for two full days in person to review these grants some more and talk about them and score them. We did this for 64 applications for up to $5,000 each, and in the end, <del>92%</del> 94% were funded.</p>
<p>So consider this as a research exercise. The decision is who to give grants to, and how much. The data is the grant applications. The researchers are the review panel. <em>What uncertainty is being reduced by this process?</em> How much worse would the outcome have been if we’d just taken all the organizations, put them into Excel, run a random number generator, and distributed the dollars randomly up to $5,000 per organization? And I&#8217;m not saying this to make fun of this particular organization or single them out, because honestly it&#8217;s not uncommon to take this kind of approach to small-scale grantmaking. And yet if you compare it to <a href="http://www.artplaceamerica.org/articles/artplace-announces-grants/">ArtPlace’s first round of grants</a>, theoretically they had thousands of projects to choose from, and they gave grants up to $1 million for creative placemaking projects – but there was no [open] review process; they just chose organizations to give grants to. So there&#8217;s a bit of a mismatch in the strategies we use to decide how to allocate resources.</p>
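<p>To put a rough number on that thought experiment, here is a toy simulation; every figure is hypothetical, chosen only to mirror the scale of the panel described above: 64 applicants, each randomly awarded between $0 and $5,000 in $500 increments.</p>

```python
import random

random.seed(0)  # make the toy run reproducible

# Hypothetical applicant pool, mirroring the 64-application panel above
applicants = [f"org_{i}" for i in range(64)]

# "Excel plus a random number generator": award each org $0-$5,000
grants = {org: random.randrange(0, 5001, 500) for org in applicants}

funded = [org for org, amount in grants.items() if amount > 0]
print(f"{len(funded) / len(applicants):.0%} of applicants funded, "
      f"${sum(grants.values()):,} distributed in total")
```

<p>The question this exercise poses is how different the real-world outcomes of such an allocation would be from those produced by hundreds of person-hours of panel review.</p>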
<p>There’s a concept called “expected value of information” described in a wonderful book called <a href="http://www.amazon.com/How-Measure-Anything-Intangibles-Business/dp/1452654204"><em>How to Measure Anything</em></a>, by Douglas W. Hubbard. It’s a way of taking into account how much information matters to your decision-making process. In the book, Hubbard shares a couple of specific findings from his work as a consultant. He found that most variables have an information value of zero; in other words, we can study them all we want, but whatever the truth turns out to be won&#8217;t change what we do, because they don&#8217;t matter enough in the grand scheme of things. And he also found that the things that matter the most, the kinds of things that really would change our decisions, often aren&#8217;t studied, because they&#8217;re perceived as too difficult to measure. So we need to ask ourselves how new information would actually change the decisions we make.</p>
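<p>Hubbard&#8217;s idea can be made concrete with a small calculation. The sketch below computes the expected value of perfect information (one of the quantities Hubbard works with) for a made-up funding decision; every number in it is hypothetical.</p>

```python
# Toy expected-value-of-perfect-information (EVPI) calculation.
# Decision: fund a $50,000 program that yields $200,000 of value if it works.
p_works = 0.3                        # hypothetical current belief that it works
payoff, cost = 200_000, 50_000

# Expected value of each action under current uncertainty
ev_fund = p_works * payoff - cost    # roughly $10,000
ev_skip = 0.0
best_now = max(ev_fund, ev_skip)     # on current beliefs, we'd fund it

# With perfect information, we'd fund only in the "it works" cases
ev_perfect = p_works * (payoff - cost)   # roughly $45,000

evpi = ev_perfect - best_now   # the most that research on this question is worth
print(f"EVPI = ${evpi:,.0f}")
```

<p>If a proposed study of the question costs more than the EVPI, or couldn&#8217;t change the decision at all, its information value is effectively zero, which is exactly Hubbard&#8217;s point.</p>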
<p>There is so much untapped potential in arts research. But it remains untapped because of all the issues described above. So what can we do about it?</p>
<p>First, <strong>we need a major field-building effort for arts research</strong>. Connecting researchers with each other through a virtual network/community of practice would help a lot. So would a centralized clearinghouse where all research can live, even if access to some of it is restricted by copyright. The good news is that the National Endowment for the Arts has already been making some moves in this direction. The Endowment published a monograph a couple of months ago called “<a href="http://www.nea.gov/research/How-Art-Works/How-Art-Works.pdf">How Art Works</a>,” the major focus of which was a so-called &#8220;system map&#8221; for the arts. But the document also contains a pretty detailed research agenda (for the NEA, not for the entire field) that lays out what the NEA&#8217;s Office of Research and Analysis is going to do over the next five years, and two of the items mentioned are exactly the two things I just talked about: a virtual research network and a centralized clearinghouse for arts research.</p>
<p>This new field that we&#8217;re building should be <strong>guided by a national research agenda that is collaboratively generated and directly tied to decisions of consequence</strong>. The missing piece from the research agenda in “How Art Works” is the tie to actual decisions. Instead it has categories, like cultural participation, and research projects can be sorted under those buckets. But it&#8217;s not enough for research to simply be about something &#8211; research should serve some purpose. What do we actually need to know in order to do our jobs better?</p>
<p>We should be asking researchers to spend <strong>less time generating new research and more time critically evaluating other people’s research</strong>. We need to generate lots more discussion about the research that has already been produced. That’s the only way it’s going to enter the public consciousness. Each time we fail to do that, we are missing out on opportunities to increase knowledge. Engaging in a healthy debate about research will also raise our collective standards for it. But realistically, in order for this to happen, field incentives are going to have to change – analyzing existing research will need to be seen as just as prestigious, and as worthy of funding, as creating a new study. Of course, I would prefer that people not evaluate the work of their direct competitors – but I’ll take what I can get at this point!</p>
<p><strong>Every research effort should take into account the expected value of the information it will produce</strong>. Consider the risk involved in various types of grants made. What are you trying to achieve by giving out lots of small grants, if that&#8217;s what you&#8217;re doing? Maybe measure the effectiveness of the overall strategy instead of the success or failure of each grant. This is getting into hypothesis territory, but based on what I&#8217;ve seen so far I would guess that research on <i>grant strategy</i> is woefully underfunded, while research on the effectiveness or potential of <i>specific grants</i> is probably overfunded. We probably worry more than we need to about individual grants, but we don&#8217;t worry as much as we should about whether the ways in which we&#8217;re making decisions about which grants to support are the right ways to do that.</p>
<p>Finally, we should be <strong>open-sourcing research and working as a team</strong>. I&#8217;m talking about sharing not just finished products and final reports, but plans, data, methodologies as well. I&#8217;m talking about seeking multiple uses and potential partners at every point for the work we’re doing. This would make our work more effective by allowing us to leverage each other’s strengths &#8211; we’re not all experts at everything, after all! And it would cut down on duplicated effort and free up expensive people’s time to do work that moves the field forward.</p>
<p>I thank everyone for their time, and I&#8217;d love to take any questions or comments on these thoughts about the state of our research field.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2013/02/solving-the-underpants-gnomes-problem-towards-an-evidence-based-arts-policy/feed/</wfw:commentRss>
		<slash:comments>2</slash:comments>
		</item>
		<item>
		<title>Economies and Diseconomies of Scale in the Arts &#8211; Take Two</title>
		<link>https://createquity.com/2012/12/economies-and-diseconomies-of-scale-in-the-arts-take-two/</link>
		<comments>https://createquity.com/2012/12/economies-and-diseconomies-of-scale-in-the-arts-take-two/#comments</comments>
		<pubDate>Thu, 06 Dec 2012 14:21:23 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Economy]]></category>
		<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[institutions]]></category>
		<category><![CDATA[scale]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4189</guid>
		<description><![CDATA[(The following post is part of a weeklong salon at ARTSBlog on the subject of &#8220;Does Size Matter?&#8221; The entire salon is worth checking out, and former Createquity Writing Fellow Katherine Gressel has an entry as well.) How does scale influence impact in the arts? In 2007, back when I was a fresh-faced grad student,<a href="https://createquity.com/2012/12/economies-and-diseconomies-of-scale-in-the-arts-take-two/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p><em>(The following post is part of a <a href="http://blog.artsusa.org/2012/12/04/economies-and-diseconomies-of-scale-in-the-arts/">weeklong salon at ARTSBlog</a> on the subject of &#8220;Does Size Matter?&#8221; The entire salon is <a href="http://blog.artsusa.org/tag/december-2012-blog-salon/">worth checking out</a>, and former Createquity Writing Fellow Katherine Gressel <a href="http://blog.artsusa.org/2012/12/04/scaling-up-participation-the-expansion-of-figment/">has an entry</a> as well.)</em></p>
<p>How does scale influence impact in the arts? In 2007, back when I was a fresh-faced grad student, I actually addressed this question head on in the <a href="https://createquity.com/2007/11/thoughts-on-effective-philanthropy-part_20.html">eighth post ever published on Createquity</a>. I argued pretty strongly that scale in the arts was a myth, or at least not salient to the same extent as in other fields:</p>
<blockquote><p>It’s not that I don’t think large arts organizations do good work, or that they don’t deserve to be supported. What I’m going to argue instead is that there is a tendency among many institutional givers to direct their resources toward organizations that have well-developed support infrastructure, long histories, and vast budgets, and in a lot of ways it’s a tendency that doesn’t make much sense (or at the very least, could use some balance).</p>
<p>For one thing, those well-developed support infrastructures don’t come cheap. Consider the case of Carnegie Hall… [snip]</p>
<p>In contrast, small arts organizations are <strong>extraordinarily </strong>frugal with their resources, precisely because they have no resources to speak of. It’s frankly amazing to me what largely unheralded art galleries, musical ensembles, theater companies, dance troupes, and performance art collectives are able to accomplish with essentially nothing but passion on their side. A $5,000 contribution that would barely get you into the <a href="http://www.carnegiehall.org/article/support_the_hall/patrons/index.html">sixth-highest donor category</a> at Carnegie might radically transform the livelihood of an organization like this. Suddenly, they might be able to buy some time in the recording studio, or hire an accompanist for rehearsals, or redo that floor in the lobby, or even (gasp) PAY their artists! All of which previously had seemed inconceivable because of the poverty that these organizations grapple with.</p></blockquote>
<p>The literature on scaling impact in the social sector tends to take for granted that scale is a good thing—that services are provided more effectively when centralized under a strong leader and when efficiencies can be exploited across functions and sites. This logic makes sense when the goal is to solve a systemic problem that is evident in many different contexts, such as physical places. If you’ve come up with a solution that works in Chicago, why wouldn’t you want to bring it to New York and DC? Arts service organizations, in fact, can likely benefit from economies of scale. <a href="http://www.fracturedatlas.org/">Fractured Atlas</a> has certainly been able to accomplish a lot more because its focus is national and cross-disciplinary than would have been the case otherwise, and scale has no doubt been a motivating factor behind Americans for the Arts’s many mergers.</p>
<p>But when you get to talking about arts producers and presenters, which I think is what most people mean when they say “the arts,” the conversation about scale becomes very different. What problem, exactly, is being solved here? It seems like the whole point of the nonprofit arts is to add to the aesthetic diversity that would otherwise exist in the marketplace for creative expression. If the point is diversity, how is that goal served by attempting to scale up institutions? The very commercial marketplace to which the nonprofit arts strive to provide an alternative <em>loves </em>scale – it thrives on it, because scale begets market power, which begets revenue, which begets profit. (Profits worth talking about, anyway.)</p>
<p>Leaving our cultural lives in the hands of commercial entities, many theorists have worried, will result in a boring sameness, an attempt to feed the world’s aesthetic appetite with the equivalent of TV dinners every day.* Our sector takes it on faith that there are forms of artistic expression that have clear cultural value and relevance even if replicating them widely is not practical. I suppose if you believe that these forms are specific and identifiable in nature (e.g., classical music, plays by Henrik Ibsen), then scaling them to help them compete with commercial cultural products would make sense. But if you believe, as I do, that their value comes in large part from the diversity they add to our collective palate, it’s much better to spread the subsidy around.</p>
<p>On a purely theoretical level, my view hasn’t changed that much in the five years since I wrote that thought piece. However, having become more closely involved with several grantmakers (including serving on a couple of grant panels) since then, I’ve developed a newfound appreciation for what large organizations can accomplish with scale. The scale that institutions traffic in does not have to do with the creation or presentation of work, but rather the audiences reached by that work. There are arts consumers – plenty of them, in fact – who simply will never frequent a show or exhibition by a smaller, experimental group or venue unless they personally know someone in it. But give that experimental group an institution’s stamp of approval, and those audience members are all over it. That’s got to count for something, and speaks volumes about the <a href="https://createquity.com/2011/03/supply-is-not-going-to-decrease-so-its-time-to-think-about-curating.html">curatorial role that large institutions have in the broader ecosystem</a>.</p>
<p>That said, one thing I still don’t see much of on the part of arts funders is a willingness to consider the transformative potential (or lack thereof) of grants. Some years ago, the Hewlett Foundation developed a simple yet very clever rubric for grant selection called <a href="http://www.hewlett.org/uploads/files/Making_Every_Dollar_Count.pdf">Expected Return</a>. One of the ways in which Expected Return is clever is that it accounts for the proportion of a project’s success in an ideal world that can be attributed to the grant you made. The less of the budget you’re responsible for, the less of a difference you’re really making. As I wrote then and still believe now, “Foundations concerned with ‘impact’ should remember that it’s far easier to have a measurable effect on an organization’s effectiveness when the amount of money provided is not dwarfed by the organization’s budget.”</p>
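<p>The attribution idea can be illustrated with a stylized calculation. This is only the general shape of such a rubric, not the Hewlett Foundation&#8217;s actual Expected Return formula, and all of the figures are invented.</p>

```python
# Stylized expected-return calculation (not Hewlett's actual formula).
def expected_return(ideal_benefit, p_success, grant, project_budget):
    """Benefit in an ideal world, discounted by the likelihood of success
    and by the share of the project the grant is actually paying for."""
    contribution = grant / project_budget   # attribution share
    return ideal_benefit * p_success * contribution / grant

# The same $5,000 grant to a $20,000 project vs. a $2,000,000 institution
small = expected_return(100_000, 0.5, 5_000, 20_000)
large = expected_return(100_000, 0.5, 5_000, 2_000_000)
print(small, large)   # the small grant's expected return is ~100x larger
```

<p>The gap between the two results is driven entirely by the attribution term: the smaller the grant relative to the budget, the less of the outcome the funder can plausibly claim.</p>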
<p>&nbsp;</p>
<p><em>*Defenders of pop culture will no doubt cite the many creative achievements of the entertainment industry as evidence against this point – and they are certainly right to celebrate </em>The Wire<em>, Radiohead, and </em>The Lord of the Rings<em>. But for every groundbreaking artist who succeeds in the market economy, there are dozens more who don’t, and plenty of mediocre talents gumming up our headphones and screens instead.</em></p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/12/economies-and-diseconomies-of-scale-in-the-arts-take-two/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>Fuzzy Concepts, Proxy Data: Why Indicators Won’t Track Creative Placemaking Success</title>
		<link>https://createquity.com/2012/11/fuzzy-concepts-proxy-data-why-indicators-wont-track-creative-placemaking-success/</link>
		<comments>https://createquity.com/2012/11/fuzzy-concepts-proxy-data-why-indicators-wont-track-creative-placemaking-success/#comments</comments>
		<pubDate>Fri, 09 Nov 2012 14:30:54 +0000</pubDate>
		<dc:creator><![CDATA[Ann Markusen]]></dc:creator>
				<category><![CDATA[Economy]]></category>
		<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Policy & Advocacy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Ann Markusen]]></category>
		<category><![CDATA[ArtPlace]]></category>
		<category><![CDATA[creative placemaking]]></category>
		<category><![CDATA[cultural equity]]></category>
		<category><![CDATA[economic development and the arts]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[NEA]]></category>
		<category><![CDATA[venture philanthropy]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4090</guid>
		<description><![CDATA[One of creative placemaking's original champions explains why she can't get behind the field's latest measurement efforts.]]></description>
				<content:encoded><![CDATA[<div style="width: 510px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/b-love/2870639740/"><img loading="lazy" decoding="async" class=" " title="Fuzzy concept" src="http://farm4.staticflickr.com/3109/2870639740_b88b8433e1.jpg" alt="" width="500" height="333" /></a><p class="wp-caption-text">&#8220;There is nothing worse than a sharp image of a fuzzy concept.&#8221; -Ansel Adams<br />Photo by beast love</p></div>
<p><em>(If you don&#8217;t know the name Ann Markusen, you should. As professor and <a href="http://www.hhh.umn.edu/people/amarkusen/">director of the Project on Regional and Industrial Economics</a> at the University of Minnesota Humphrey School of Public Affairs, Ann has become one of the most respected and senior voices in the arts research community over the past decade. Among her best-known recent efforts was her authorship, with Anne Gadwa Nicodemus, of the original <a href="http://www.nea.gov/pub/CreativePlacemaking-Paper.pdf">Creative Placemaking white paper</a> published by the NEA prior to the creation of the Our Town grant program and ArtPlace funder collaborative. So when she approached me to offer a guest post on evaluation challenges for creative placemaking, building on <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html">previous coverage of the topic</a> here at Createquity, I could hardly say no. I hope you enjoy Ann&#8217;s piece and I look forward to the vigorous discussion it will no doubt spark. -IDM)</em></p>
<p>*</p>
<p>Creative placemaking is electrifying communities large and small around the country. Mayors, public agencies and arts organizations are finding each other and committing to new initiatives. That’s a wonderful thing, whether or not their proposals are funded by national initiatives such as the National Endowment for the Arts’s <a href="http://www.nea.gov/national/ourtown/index.php">Our Town program</a> or <a href="http://www.artplaceamerica.org/">ArtPlace</a>.</p>
<p>It’s important to learn from and improve our practices on this new and so promising terrain. But efforts based on fuzzy concepts and indicators designed to rely on data external to the funded projects are bound to disappoint. Our evaluative systems must nurture rather than discourage the marvelous movement of arts organizations, artists and arts funders out of their bunkers and into our neighborhoods as leaders, animators, and above all, exhibitors of the value of arts and culture.</p>
<p>In our 2010 <a href="http://metrisarts.com/wp-content/uploads/2012/06/CreativePlacemaking-Full-Report.pdf"><em>Creative Placemaking </em>white paper for the NEA</a>, Anne Gadwa Nicodemus and I characterize creative placemaking as a process where “partners&#8230; shape the physical and social character of a neighborhood, town, city, or region around arts and cultural activities.” A prominent ambition, we wrote, is to “bring diverse people together to celebrate, inspire, and be inspired.”  Creative placemaking also “animates public and private spaces, rejuvenates structures and streetscapes, (and) improves local business viability and public safety,” but arts and culture <em>are at its core. </em>This definition suggests a number of distinctive arenas of experimentation, where the gifts of the arts are devoted to community liveliness and collaborative problem-solving and where new people participate in the arts and share their cultures.</p>
<p>And, indeed, Our Town and ArtPlace encourage precisely this experimental ferment. Like the case studies in <em>Creative Placemaking</em>, each funded project is unique in its artistic disciplines, scale, problems addressed and aspirations for its particular place. Thus, a good evaluation system will monitor the progress of each project team towards its stated goals, including revisions made along the way. NEA’s Our Town asks grant-seekers to describe how they intend to evaluate their work, and ArtPlace requires a monthly blog entry. But rather than more formally evaluate each project’s progress over time, both funders have developed and are compiling place-specific measures based on external data sources that they will use to gauge success: the <a href="https://www.fbo.gov/index?s=opportunity&amp;mode=form&amp;id=39f0ca2bec49a35d83076660a0b76992&amp;tab=core&amp;_cview=1">Arts and Livability Indicators</a>  in the case of the NEA, and what ArtPlace is calling its <a href="http://www.artplaceamerica.org/articles/vibrancy-indicators/">Vibrancy Indicators</a>.</p>
<p>Creative placemaking funders are optimistic about these efforts and their usefulness. “Over the next year or two,” <a href="http://artworks.arts.gov/?p=13382">wrote Jason Schupbach</a>, NEA’s Director of Design, last May, “we will build out this system and publish it through a website so that anyone who wants to track a project’s progress in these areas (improved local community of artists and arts organizations, increased community attachment, improved quality of life, invigorated local economies) will be able to do so, whether it is NEA-funded or not. They can simply enter the time and geography parameters relevant to their project and see for themselves.”</p>
<p>Over the past two years, I have been consulting with creative placemaking leaders and given talks to audiences in many cities and towns across the country and abroad. Increasingly, I am hearing distress on the part of creative placemaking practitioners about the indicator initiatives of the National Endowment for the Arts and ArtPlace. At the annual meetings of the National Alliance for Media Arts and Culture last month, my fellow Creative Placemaking panel members, all involved in one or more ArtPlace- or Our-Town-funded projects, expressed considerable anxiety and confusion about these indicators and how they are being constructed. In particular, many current grantee teams with whom I’ve spoken are baffled by the one-measure-fits-all nature of the indicators, especially in the absence of formal and case-tailored evaluation.</p>
<p>I’ll confess I’m an evidence gal. I fervently believe in numbers where they are a good measure of outcomes; in secondary data like Census and the National Center for Charitable Statistics where they are up to the task; in surveys where no such data exist; in case studies to illuminate the context, process, and the impacts people tangibly experience; in interviews to find out how actors make decisions and view their own performance. My own work over the past decade is <a href="http://www.hhh.umn.edu/projects/prie/PRIE--publications.html">riddled with examples of these practices</a>, including appendices intended to make the methodology and data used as transparent as possible.</p>
<p>So I embrace the project of evaluation, but am skeptical of relying on indicators for this purpose. In pursuing a more effective course, we can learn a lot from private sector venture capital practices, the ways that foundations conduct grantee evaluations, and, for political pitfalls, defense conversion placemaking experiments of the 1990s.</p>
<p>&nbsp;</p>
<p><strong>Learning from Venture Capital and Philanthropy</strong></p>
<p>How do private sector venture capital (VC) firms evaluate the enterprises they invest in? Although they target rates of return in the longer run, they do not resort to indicators based on secondary data to evaluate progress. They closely monitor their investees—small firms that often have little business experience, just as many creative placemaking teams are new to their terrain. VC firms play an active role in guiding youthful companies, giving them feedback germane to their product or service goals. They help managers evaluate their progress and bring in special expertise where needed.</p>
<p>Venture capital firms are patient, understanding realistic timelines. The rule of thumb is that they commit to five to seven years, though it may be less or more. Among our <em>Creative Placemaking</em> cases, few efforts succeeded in five years, while some took ten to fifteen years.</p>
<p>VC firms know that some efforts will fail. They are attentive to learning from such failures and sharing what they learn in generic form with the larger business community. Both ArtPlace and the NEA have stated their desire to learn from success and failure. Yet generic indicators, their chosen evaluation tools, are neither patient nor tailored to specific project ambitions. Current Our Town and ArtPlace grant recipients worry that the 1-2 years of funding they’re getting won’t be enough to carry projects through to success or establish enough local momentum to be self-sustaining. Neither ArtPlace nor Our Town has a realistic exit strategy in place for its investments, other than “the grant period’s over, good luck!”</p>
<p>Hands-on guidance is not foreign to nonprofit philanthropies funding the arts.  Many arts program officers act as informal consultants and mentors to young struggling arts organizations and to mature ones facing new challenges. My study with Amanda Johnson of <a href="http://www.hhh.umn.edu/centers/prie/pdf/artists_centers.pdf"><em>Artists&#8217; Centers</em></a> shows how Minnesota funders have played such roles for decades. They ask established arts executive directors to mentor new start-ups, a process that the latter praised highly as crucial to their success. The Irvine and Hewlett Foundations are currently funding California nonprofit intermediaries <a href="http://www.giarts.org/article/working-small-arts-organizations">to help small, folk and ethnic organizations use grant monies wisely</a>. They also pay for intermediaries across sectors (arts and culture, health, community development and so on) to meet together to learn what works best.</p>
<p>The NEA has hosted three webinars at which Our Town panelists talk about what they see as effective projects/proposals, a step in this direction. But these discussions are far from a systematic gathering and collating of experience from all grantees in ways that would help the cohorts learn and contact those with similar challenges.</p>
<p>&nbsp;</p>
<p><strong>The Indicator Impetus</strong></p>
<p>Why are the major funders of creative placemaking staking so much on indicators rather than evaluating projects on their own aspirations and steps forward? Pressure from the Office of Management and Budget, the federal bean-counters, is one factor.  In January of 2011, President Obama signed into law the GPRA Modernization Act, <a href="http://www.whitehouse.gov/omb/mgmt-gpra/index-gpra">updating the original 1993 Government Performance and Results Act (GPRA)</a>, and a new August 2012 Circular A-11 <a href="http://www.whitehouse.gov/sites/default/files/omb/assets/a11_current_year/s200.pdf">heavily emphasizes use of performance indicators</a> for all agencies and their programs<em>.</em></p>
<p>As a veteran of research and policy work on scientific and engineering occupations and on industrial sectors like steel and the military industrial complex, I fear that others will perceive indicator mania as a sign of field weakness. To Ian David Moss’s provocative title “<a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html">Creative Placemaking has an Outcomes Problem</a>,” I’d reply that we’re in good company. Huge agencies of the federal government, like the National Science Foundation, the National Institutes of Health and NASA, fund experiments and exploratory development without asking that results be held up to some set of external indicators not closely related to their missions. They accept slow progress and even failure, as in cancer research or nuclear fusion, because the end goal is worthy and because we learn from failure. Evaluation by external generic indicators fails to acknowledge the experimental and ground-breaking nature of these creative-placemaking initiatives and misses an opportunity to bolster understanding of how arts and cultural missions create public value.</p>
<p>&nbsp;</p>
<p><strong>Why Indicators Will Disappoint I: Definitional Challenges</strong></p>
<p>Many of the indicators charted in ArtPlace, NEA Our Town, and other exercises (e.g. WESTAF’s <a href="https://cvi.westaf.org/">Creative Vitality Index</a>) bear a tenuous relationship to the complex fabric of communities or specific creative placemaking initiatives. Terms like “vitality,” “vibrancy,” and “livability” are great examples of fuzzy concepts, a notion that I <a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.117.6828.pdf">used a decade ago</a> to critique planners’ and geographers’ infatuation with concepts like “world cities” and “flexible specialization.” A fuzzy concept is one that means different things to different people, but flourishes precisely because of its imprecision. It leaves one open to trenchant critiques, as in Thomas Frank’s <a href="http://www.thebaffler.com/past/dead_end_on_shakin_street">recent pillorying of the notion of vibrancy</a>.</p>
<p>Take livability, for instance, <a href="https://www.fbo.gov/index?s=opportunity&amp;mode=form&amp;id=39f0ca2bec49a35d83076660a0b76992&amp;tab=core&amp;_cview=1">prominent in the NEA’s indicators project</a>. One person’s quality of life can be inimical to others’. Take the young live music scene in cities: youth magnet, older resident nightmare.  Probably no concept as worthy as quality of life has been the subject of so many disappointing and conflicting measurement exercises.</p>
<p>Just what does vibrancy mean? Let’s try to unpack the term. <a href="http://www.artplaceamerica.org/loi/">ArtPlace’s definition</a>: “we define vibrancy as places with an unusual scale and intensity of specific kinds of human interaction.” Pretty vague and&#8230;.vibrancy are places?  Unusual scale? Scale meaning extensive, intensive? Of specific kinds? What kinds? This definition is followed by: “While we are not able to measure vibrancy directly, we believe that the measures we are assembling, taken together, will provide useful insights into the nature and location of especially vibrant places within communities.”  If I were running a college or community discussion session on this, I would put the terms “vibrancy, places, communities, measures,” and so on up on the board (so to speak), and we would undoubtedly have a spirited and inconclusive debate!</p>
<p>And what is the purpose of measuring vibrancy? Again from the same ArtPlace LOI: “…the purpose of our vibrancy metrics is not to pronounce some projects ‘successes’ and other projects ‘failures’ but rather to learn more about the characteristics of the projects and community context in which they take place which leads to or at least seems associated with improved places.” Even though the above description mentions “characteristics of the projects,” it’s notable that their published vibrancy indicators only measure features of place.</p>
<p>In fact, many of the ArtPlace and NEA indicators are roughly designed and sometimes in conflict. While giving the nod to “thriving in place,” ArtPlace emphasizes the desirability of visitors in its vibrancy definition (meaning outsiders to the community); by contrast, the NEA prioritizes social cohesion and community attachment, attributes scarce in the ArtPlace definitions. For instance, ArtPlace proposes to use employment ratio—“the number of employed residents living in a particular geography (Census Block) and dividing that number by the working age persons living on that same block” as a measure of people-vibrancy. The rationale: “vibrant neighborhoods have a high fraction of their residents of working age who are employed.” Think of the large areas of new non-mixed-use upscale high-rise condos where the mostly young professional residents commute daily to jobs and nightly to bars and cafes outside the neighborhood. Not vibrant at all. But such areas would rank high using this measure.</p>
<p>ArtPlace links vibrancy with diversity, defined as heterogeneity of people by income, race and ethnicity. They propose “the racial and ethnic diversity index” (composition not made explicit) and “the mixed-income, middle income index” (ditto) to capture diversity. But what about age diversity? Shouldn’t we want intergenerational activity and encounters too? It is also problematic to prioritize the dilution of ethnicity in large enclaves of recent immigrant groups. Would a thriving heavily Vietnamese city or suburb be considered non-vibrant because its residents choose to live and build their cultural institutions there, facing discrimination in other housing markets? Would an ethnic neighborhood experiencing white hipster incursions be evaluated positively despite declines in its minority population as lower-income people are forced out?</p>
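<p>To make the stakes of composition concrete, here is a sketch of one standard construction (not necessarily ArtPlace’s, whose formula is unpublished): a Simpson-style diversity index, one minus the sum of squared group shares. The neighborhood counts below are invented; the point is that any such index rewards dilution of a large group, whatever the reason residents cluster:</p>

```python
# Illustrative only: ArtPlace does not publish its index formula, so this
# uses a common textbook construction, the Simpson (Herfindahl-based)
# diversity index: D = 1 - sum(p_i^2), where p_i is each group's share.
# D is 0 for a perfectly homogeneous area and rises toward 1 as groups
# approach equal shares.

def simpson_diversity(counts):
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# A hypothetical enclave that is 90% one group scores low...
enclave = simpson_diversity([900, 50, 30, 20])      # ~0.19
# ...while an evenly mixed area scores high, regardless of WHICH groups
# live there, or whether they cluster by choice or by discrimination --
# the index is blind to age, history, and circumstance.
mixed = simpson_diversity([250, 250, 250, 250])     # 0.75
```

On this measure the thriving Vietnamese enclave described above would rank as “non-vibrant” by construction.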
<p>Many of the NEA’s indicators are similarly fuzzy. As an indicator of impact on art communities and artists, <a href="https://www.fbo.gov/index?s=opportunity&amp;mode=form&amp;id=39f0ca2bec49a35d83076660a0b76992&amp;tab=core&amp;_cview=1">its August 2012 RFP</a> proposes median earnings for residents employed in entertainment-related industries (arts, design, entertainment, sports, and media occupations). But a very large number of people in these occupations are in sports and media fields, not the arts. The measure does not include artists who live outside the area but work there. And many artists self-report their industry as other than the one listed above, e.g. musicians work in the restaurant sector, and graphic artists work in motion pictures, publishing and so on. ArtPlace is proposing to use very similar indicators—creative industry jobs and workers in creative occupations—as measures of vibrancy.</p>
<p>It is troubling that neither indicator-building effort has so far demonstrated a willingness to digest and share publicly the rich, accessible, and cautionary published research that tackles many of these definitions. See for instance &#8220;<a href="http://edq.sagepub.com/content/22/1/24.abstract">Defining the Creative Economy: Industry and Occupational Approaches</a>,&#8221; the joint effort by researchers Doug DeNatale and Greg Wassall from the New England Creative Economy Project, Randy Cohen of Americans for the Arts, and me at the Arts Economy Initiative to unpack the definitional and data challenges for measuring arts-related jobs and industries in <em>Economic Development Quarterly.</em></p>
<p>Hopefully, we can have an engaging debate about these notions before indices are cranked out and disseminated. Heartening signs: in its August RFP, the NEA backtracks from its original plan, unveiled in a spring 2012 webinar, to contract for wholesale construction of a given set of indicators to be distributed to grantees. Instead, it is now contracting for the testing of indicator suitability by conducting twenty case studies. And just last week, the NEA <a href="https://www.fbo.gov/?s=opportunity&amp;mode=form&amp;id=3e198017f18bfd723f557702a7b46bca&amp;tab=core&amp;_cview=1">issued a new RFP for developing a virtual storybook</a> to document community outcomes, lessons learned and experiences associated with their creative placemaking projects.</p>
<p>&nbsp;</p>
<p><strong>Why Indicators Will Disappoint II: Dearth of Good Data</strong></p>
<p>If definitional problems aren’t troubling enough, think about the sheer inadequacy of data sources available for creating place-specific indicators.</p>
<p>For more than a half-century, planning and economic development scholars have been studying places and policy interventions to judge success or failure. Yet when Anne Gadwa Nicodemus went in search of research results on decades of public housing interventions, assuming she could build on these for her evaluation of Artspace Projects’ artist live/work and studio buildings, she found that they don’t really exist.</p>
<p>Here are five serious operational problems confronting creative placemaking indicator construction. First, the dimensions to be measured are hard to pin down. Some of the variables proposed are quite problematic—they don’t capture universal values for all people in the community.</p>
<p>Take ArtPlace’s <a href="http://www.artplaceamerica.org/articles/vibrancy-indicators/">cell phone activity indicator</a>, for instance, which will be used on nights and weekends to map where people congregate. Are places with cell activity to be judged as more successful at creative placemaking? Cell phone usage is heavily correlated with age, income and ethnicity. The older you are, the less likely you are to have a cell phone or use it much, and the more likely to rely on land-lines, which many young people do without. At the November 2012 Association of Collegiate Schools of Planning annual meetings, Brettany Shannon of the University of Southern California presented research results from a survey of 460 LA bus riders showing low cell phone usage rates among the elderly, particularly Latinos. Among those aged 18-30, only 9% of English speakers and 15% of Spanish speakers had no cell phone, compared with 29% of English speakers and 54% of Spanish speakers over age 50.  A cell phone activity measure is also likely to completely miss people attending jazz or classical music concerts, dramas, and religious cultural events where cell phones are turned off. And what about all those older folks who prefer to sit in coffee shops and talk to each other during the day, play leadership roles in the community through face-to-face work, or meet and engage in arts and cultural activities around religious venues? Aren’t they congregating, too?</p>
<p>Or take home ownership and home values, an indicator the NEA hopes to use. Hmmm… home ownership rates—and values—in the US have been falling, in large part due to overselling of homes during the housing bubble. Renting is just as respectable an option for place lovers, especially young people, retirees, and lower-income people in general. Why would we want grantees to aspire to raise homeownership rates in their neighborhoods, especially given gentrification concerns? Home ownership does not insulate you against displacement, because as property values rise, property taxes do as well, driving out renters and homeowners alike on fixed or lower incomes. ArtPlace is developing “measures of value, which capture changes in rental and ownership values…” This reads like an invitation to gentrification, and runs contrary to the NEA’s aspirations for creative placemaking to support social cohesion and community attachment.</p>
<p>Second, most good secondary data series are not available at spatial scales corresponding to grantees’ target places. ArtPlace’s vibrancy exercise aspires to compare neighborhoods with other neighborhoods, but available data makes this task almost impossible to accomplish at highly localized scales. Some data points, like arts employment by industry, are available only down to the county level and only for more heavily populated counties because of suppression problems (and because they are lumped together with sports and media in some data sets). Good data on artists from the Census (Public Use Microdata Sample) and American Community Surveys, the only database that includes the self-employed and unemployed, can’t be broken down below PUMA (Public Use Microdata Areas) of 100,000 people that bear little relationship to real neighborhoods or city districts (see <em>Crossover</em>, where we mapped artists <a href="http://irvine.org/news-insights/publications/arts">using 2000 PUMS data for the Los Angeles and Bay Area metros</a>).</p>
<p>Plus, many creative placemaking efforts have ambitions to have an impact at multiple scales. Gadwa Nicodemus’s <a href="http://metrisarts.com/">pioneering research studies</a>, <em>How Artist Space Matters </em>and <em>How Art Spaces Matter II,</em> looked in hindsight at Artspace’s artist live/work and mixed use projects where the criteria for success varied widely between projects and for various stakeholders involved in each.  Artists, nonprofit arts organizations, and commercial enterprises (e.g. cafes) in the buildings variously hoped that the project would have an impact on the regional arts community, neighborhood commercial activity and crime rates, and local property values. The research methods included surveys and interviews exploring whether the goals of the projects had been achieved in the experience of target users. Others involved complex secondary data manipulation to come up with indicators that are a good fit. Gadwa Nicodemus’s studies demonstrate how much work it is to document real impact along several dimensions, multiple spatial scales, and a long enough time period to ensure a decent test. Her indicators, such as hedonic price indices to gauge area property value change, are sophisticated, but also very time- and skill-intensive to construct.</p>
<p>Third, even if you find data that address what you hope to achieve, they are unlikely to be statistically significant at the scales you hope for. In our work with PUMS data from the 2000 Census, a very reliable 5% sample, we found we could not make reliable estimates of artist populations at anything near a neighborhood scale. To map the location of artists in Minneapolis, we had to carve the city into three segments based on PUMA lines, and even then, we were pushing the statistical reliability hard (<em><a href="http://www.hhh.umn.edu/centers/prie/pdf/artists_centers.pdf">Artists&#8217; Centers</a>,</em> Figure 3, p. 108).</p>
<p>Some researchers are beginning to use the American Community Survey, a 1% sample much smaller than the decennial Census PUMS 5%, to build local indicators, heedless of this statistical reliability challenge. ArtPlace, for instance, is proposing to use ACS data to capture workers in creative occupations at the Census Tract level. See the statistical appendix to<em> </em>Leveraging Investments in Creativity (LINC)&#8217;s <a href="http://www.lincnet.net/files/LINC%20Artist%20Data%20User%20Guide%202008.pdf"><em>Creative Communities Artist Data User Guide </em></a> for a detailed explanation of this problem. Adding the ACS up over five years, one way of improving reliability, is problematic if you are trying to show change over a short period of time, which the creative placemaking indicators presumably aspire to do.</p>
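<p>The reliability problem is easy to quantify. As a back-of-envelope sketch (my own illustration, with invented numbers, not drawn from either study): the 95% margin of error for an estimated share from a simple random sample of n people is roughly 1.96 times the square root of p(1&#8722;p)/n, so shrinking the sampling rate from 5% to 1% roughly doubles the error band:</p>

```python
import math

# Back-of-envelope illustration with hypothetical numbers: the 95% margin
# of error for an estimated proportion p from a simple random sample of n
# people is approximately 1.96 * sqrt(p * (1 - p) / n).

def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical neighborhood of 5,000 residents, 2% of whom are artists.
population, artist_share = 5_000, 0.02
for label, rate in [("5% PUMS-style sample", 0.05), ("1% ACS-style sample", 0.01)]:
    n = int(population * rate)
    moe = margin_of_error(artist_share, n)
    print(f"{label}: n={n}, estimate 2.0% +/- {moe:.1%}")

# With n=50, the +/- is about 3.9 points -- roughly double the 2% share
# being estimated, so a neighborhood-level "artist indicator" built on a
# 1% sample is mostly noise.
```

This is why pooling the ACS over five years is the usual remedy, and why that remedy is fatal to charting short-run change.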
<p>Fourth, charting change over time successfully is a huge challenge. ArtPlace <a href="http://www.artplaceamerica.org/loi/">intends to</a> “assess the level of vibrancy of different areas within communities, and importantly, to measure changes in vibrancy over time in the communities where ArtPlace invests.” How can we expect projects that hope to change the culture, participation, physical environment and local economy to show anything in a period of one, two, three years? More ephemeral interventions may only have hard-to-measure impacts in the year that they happen, even if they catalyze spinoff activities, while the potentially clearer impact of brick-and-mortar projects may take years to materialize.</p>
<p>We know from our case studies and from decades of urban planning and design experience that changes in place take long periods of time. For example, Cleveland’s Gordon Square Arts District, <a href="http://metrisarts.com/wp-content/uploads/2012/06/CreativePlacemaking-Full-Report.pdf">a case study in </a><em><a href="http://metrisarts.com/wp-content/uploads/2012/06/CreativePlacemaking-Full-Report.pdf">Creative Placemaking</a>,</em> required at least five years for vision and conversations to translate into a feasibility study, another few years to build the streetscape and renovate the two existing shuttered theatres, and more to build the new one.</p>
<p>Because it’s unlikely that the data will be good enough to chart creative placemaking projects’ progress over time, we are likely to see indicators used in a very different and pernicious way – to compare places with each other in the current time period. But every creative placemaking initiative is very, very different from others, and their current rankings on these measures more apt to reflect long-time neighborhood evolution and particularities rather than the impact of their current activities. I can just see creative placemakers viewing such comparisons and throwing their hands up in the air, shouting, “but.. but…but, our circumstances are not comparable!”</p>
<p>One final indicator challenge. As far as I can tell, there are very few arts and cultural indicators included among the measures under consideration. Where is the mission of bringing diverse people together to celebrate, inspire, and be inspired? Shouldn’t creative placemaking advance the intrinsic values and impact of the arts? Heightened and broadened arts participation? Preserving cultural traditions? Better quality art offerings? Providing beauty, expression, and critical perspectives on our society? Are artists and arts organizations whose greatest talents lie in the arts world to be judged only on their impact outside of this core? Though arts participation is measurable, many of these “intrinsic” outcomes are challenging data-wise, just as are many of the “instrumental” outcomes given central place in current indicator efforts. WolfBrown now <a href="http://intrinsicimpact.org/">offers a website</a> that aims to “change the conversation about the benefits of arts participation, disseminate up-to-date information on emerging practices in impact assessment, and encourage cultural organizations to embrace impact assessment as standard operating practice.”</p>
<p>&nbsp;</p>
<p><strong>The Political Dangers of Relying on Indicators</strong></p>
<p>I fear three kinds of negative political responses to reliance on poorly-defined and operationalized indicators.  First, it could be off-putting to grantees and would-be grantees, including mayors, arts organizations, community development organizations and the many other partners to these projects. It could be baffling, even angering, to be served up a book of cooked indicators with very little fit to one’s project and aspirations and to be asked to make sense out of them. The NEA’s recent RFP calls for the development of a user guide with some examples, which will help. Those who have expressed concern report hearing back something like “don’t worry about it – we’re not going to hold you to any particular performance on these. They are just informational for you.” Well, but then why invest in these indicators if they aren’t going to be used for evaluation after all?!</p>
<p>Second, creative placemaking grant competitions generate losers as well as winners.  Some who aren’t funded the first time try again, and some are sanguine and grateful that they were prompted to make the effort and form a team. But some will give up. There are interesting parallels with place-based innovations in the 1990s. The Clinton administration’s post-Cold War defense conversion initiatives included the Technology Reinvestment Project, in which regional consortia competed for funds to take local military technologies into the civilian realm. As Michael Oden, Greg Bischak and Chris Evans-Klock concluded in our 1995 Rutgers study (full report available from the authors on request), the TRP failed after just a few years because Members of Congress heard from too many disgruntled constituents. In contrast, the <a href="http://www.nist.gov/mep/">Manufacturing Extension Partnership</a>, begun in the same period and administered by NIST, has survived because after its first exploratory rounds, it partnered with state governments to amplify funding for technical assistance to defense contractors struggling with defense budget implosion everywhere. States, rather than projects, then competed, eager for the federal funds.</p>
<p>Third, and most troubling, funders may begin favoring grants to places that already look good on the indicators. Anne Gadwa Nicodemus raised this in her <a href="http://metrisarts.com/recent/#Placemaking2">GIA <em>Reader</em> article on creative placemaking</a> last spring. ArtPlace’s <a href="http://www.artplaceamerica.org/loi/">own funding criteria</a> suggest this: “ArtPlace will favor investments… and sees its role as providing venture funding in the form of grants, seeding entrepreneurial projects that lead through the arts and already enjoy strong local buy-in and <em>will occur at places already showing signs of momentum</em>….” Imagine how a proposal to convert an old school in a very low income and somewhat depopulated, minority neighborhood into an artist live/work, studio and performance and learning space would stack up against a proposal to add funding to a new outreach initiative in an area already colonized by young people from elsewhere in the same city. A funder might be tempted to fund the latter, where vibrancy is already indicated, over the other, where the payoff might be much greater but farther down the road.</p>
<p>&nbsp;</p>
<p><strong>In an Ideal World, Sophisticated Models</strong></p>
<p>In any particular place, changes in the proposed indicators will not be attributable to the creative placemaking intervention alone. So imagine the distress of a fundee whose indicators are moving the wrong way and place it poorly in comparison to others. Area property values may be falling because an environmentally obnoxious plant starts up. Other projects might look great on indicators not because of their initiatives, but because another intervention, like a new light rail system or a new community-based school, dramatically changes the neighborhood.</p>
<p>What we would love to have, but don’t at this point, are sophisticated causal models of creative placemaking. The models would identify the multiple actors in the target place and take into account the results of their separate actions. A funded creative placemaking project team would be just one such “actor” among several (e.g. real estate developers, private sector employers, resident associations, community development nonprofits and so on).</p>
<p>A good model would account for other non-arts forces at work that will interact with the various actors’ initiatives and choices. This is crucial, and the logic models proposed by Moss, Zabel and others don’t do it. Scholars of urban planning well know how tricky it is to isolate the impact of a particular intervention when there are so many others occurring simultaneously (crime prevention, community development, social services, infrastructure investments like light rail or street repaving).</p>
<p>Furthermore, models should be longitudinal, i.e. they will chart progress in the particular place over time, rather than comparing one place cross-sectionally with others that are quite unlikely to share the same actors, features and circumstances. If we create models that are causal, acknowledge other forces at work, and are applied over time, “we’ll be able to clearly document the critical power of arts and culture in healthy community development,” reflects Deborah Cullinan of San Francisco’s Intersection for the Arts in a followup to our NAMAC panel.</p>
<p>Such multivariate models, as social scientists and urban planners call them, lend themselves to careful tests of hypotheses about change. We can ask if a particular action, like the siting of an interstate highway interchange or adding a prison or being funded in a federal program like the Appalachian Regional Commission, produces more employment or higher incomes or better quality of life for its host city or neighborhood when compared with twin or comparable places, as Andrew Isserman and colleagues have done in their “quasi-experimental” work (write me for a summary of these, soon to be published).</p>
<p>We can also run tests to see if differentials in city and regional arts participation rates and presence of arts organizations can be explained by differences in funding, demographics, or features of local economies. My teammates and I used Cultural Data Project and National Center for Charitable Statistics data on nonprofit arts organizations in California <a href="http://irvine.org/news-insights/publications/arts/arts-ecology-reports">to do this for all California cities with more than 20,000 residents</a>. Our results, while cross-sectional, suggest that concerted arts and culture-building by local Californians over time leads to higher arts participation rates and more arts offerings than can be explained by other factors. The point is that techniques like these DO take into account other forces (positive and negative) operating in the place where creative placemaking unfolds.</p>
<p>&nbsp;</p>
<p><strong>Charting a Better Path</strong></p>
<p>It’s understandable why the NEA and ArtPlace are turning to indicators. Their budgets for creative placemaking are relatively small, and they’d prefer to spend them on more programming and more places rather than on expensive, careful evaluations.  Nevertheless, designing indicators unrelated to specific funded projects seems a poor way forward. Here are some alternatives.</p>
<p><strong>Commit to real evaluation.</strong> This need not be as expensive as it seems. Imagine if the NEA and ArtPlace, instead of contracting to produce one-size-fits-all indicators, were to design a three-stage evaluation process.  Grantees propose staged criteria for success and reflect on them at specified junctures. Funding is awarded on the basis of the appropriateness of this evaluative process and continued on receipt of reflections. Funders use these to give feedback to the grantee and retool their expectations if necessary, and to summarize and redesign overall creative placemaking achievements. This is more or less what many philanthropic foundations do currently and have for many years, the NEA included. Better learning is apt to emerge from this process than from a set of indicator tables and graphics.  ArtPlace is well-positioned to draw on the expertise of its member foundations in this regard.</p>
<p><strong>Build cooperation among grantees to soften the edge of competition for funds.</strong> Convene grantees and would-be grantees annually to talk about success, failures, and problems. Ask successful grantees to share their experience and expertise with others who wish to try similar projects elsewhere. During Leveraging Investments in Creativity’s ten-year lifespan, it convened its creative community leaders annually and sometimes more often, resulting in tremendous cross-fertilization that boosted success. Often, what was working elsewhere turned out to be a better mission or process than what a local group had planned. Again, ArtPlace in particular could create a forum for this kind of cooperative learning. And, as mentioned, NEA’s webinars are a step in the right direction. Imagine, notes my NAMAC co-panelist Deborah Cullinan of Intersection for the Arts, if creative placemaking funders invested in cohort learning over time, with enough longevity to build relationships, share lessons, and nurture collaborations.</p>
<p>Finally, the National Endowment for the Arts and ArtPlace could <strong>provide technical assistance to creative placemaking grantees</strong>, as the <a href="http://www.nist.gov/mep/">Manufacturing Extension Partnership</a><em> </em>does for small manufacturers. Anne Gadwa Nicodemus and I continually receive phone calls from people across the country psyched to start projects but in need of information and skills on multiple fronts. There are leaders in other communities, and consultants, too, who know how creative placemaking works under diverse circumstances and who can form a loose consortium of talent: people who understand the political framework, the financial challenges, and the way to build partnerships. Artspace Projects, for instance, has recently converted over a quarter century of experience with more than two dozen completed artist and arts-serving projects into a consultancy to help people in more places craft arts-based placemaking projects.</p>
<p>Wouldn’t it be wonderful if, in a few years’ time, we could say, look!  Here is the body of learning and insights we’ve compiled about creative placemaking&#8211;how to do it well, where the diverse impacts are, and how they can be documented. With indicators dominating the evaluation process at present, we are unlikely to learn what we could from these young experiments. An indicators-preoccupied evaluation process is likely to leave us disappointed, with spreadsheets and charts made quickly obsolete by changing definitions and data collection procedures. Let’s think through outcomes in a more grounded, holistic way. Let’s continue, and broaden, the conversation!</p>
<p><em>(The author would like to thank Anne Gadwa Nicodemus, Deborah Cullinan, Ian David Moss, and Jackie Hasa for thorough reads and responses to earlier drafts of this article.)</em></p>
<p><strong>Note to readers:</strong> In addition to the comments below, the National Endowment for the Arts and ArtPlace have now published official responses to this article. Read them here:</p>
<ul>
<li>Jason Schupbach and Sunil Iyengar, <a href="https://createquity.com/2012/11/our-view-of-creative-placemaking-two-years-in.html">Our View of Creative Placemaking, Two Years In</a></li>
<li>Carol Coletta and Joe Cortright, <a href="http://www.artplaceamerica.org/articles/understanding-creative-placemaking/">Understanding Creative Placemaking</a></li>
</ul>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/11/fuzzy-concepts-proxy-data-why-indicators-wont-track-creative-placemaking-success/feed/</wfw:commentRss>
		<slash:comments>11</slash:comments>
		</item>
		<item>
		<title>Science Doesn&#8217;t Have All the Answers: Should We Be Worried?</title>
		<link>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/</link>
		<comments>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/#comments</comments>
		<pubDate>Thu, 08 Nov 2012 14:11:02 +0000</pubDate>
		<dc:creator><![CDATA[Talia Gibas]]></dc:creator>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[confirmation bias]]></category>
		<category><![CDATA[Createquity Fellowship]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[research design]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4071</guid>
		<description><![CDATA[On October 1 the science section of the New York Times ran two articles next to each other. One of them describes a recent study that concluded young children at play display behaviors similar to those of scientists, suggesting scientific inquiry is driven by human instinct. The other refers to the alarming extent to which<a href="https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<div style="width: 510px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/chachlate/5690684773/"><img loading="lazy" decoding="async" title="Double-blind study" src="http://farm6.staticflickr.com/5184/5690684773_33660aa857.jpg" alt="Double-blind study" width="500" height="492" /></a><p class="wp-caption-text">&#8220;a double-blind study,&#8221; photograph by Casey Holford</p></div>
<p>On October 1 the science section of the New York <em>Times</em> ran two articles next to each other. <a href="http://www.nytimes.com/2012/10/02/science/scientific-inquiry-among-the-preschool-set.html?_r=0">One of them</a> describes a <a href="http://www.sciencemag.org/content/337/6102/1623.abstract">recent study</a> that concluded young children at play display behaviors similar to those of scientists, suggesting scientific inquiry is driven by human instinct. The <a href="http://www.nytimes.com/2012/10/02/science/study-finds-fraud-is-widespread-in-retracted-scientific-papers.html?_r=2">other</a> refers to the alarming extent to which that human instinct muddies scientific inquiry along the way.</p>
<p>Recently the scientific community has dealt with controversies cascading across many areas of research.  Most of them relate to a phenomenon known as <a href="http://en.wikipedia.org/wiki/Publication_bias">publication bias</a>.  Put simply, publication bias occurs when research journals prioritize studies with thought-provoking—and at the very least statistically significant—results. This makes sense; it’s hard to get excited about studies that don’t show anything conclusive. We crave good stories, stunning breakthroughs, and world-changing discoveries. Such desire has driven scientific (and artistic) innovation throughout history.</p>
<p>The dark underbelly of this lust for meaning, however, is something called “significance chasing.” Researchers know their chances of getting published – and advancing their professional status – hinge on getting statistically significant results.  They have a huge incentive to hunt for and read into anomalies in data – raising the possibility of over-interpreting those anomalies as due to something other than chance. An <a href="http://www.geography.unt.edu/~rice/geog5190/5190handouts/falsepositives.pdf">article in the journal </a><em><a href="http://www.geography.unt.edu/~rice/geog5190/5190handouts/falsepositives.pdf">Psychological Science</a> </em>illustrates this point eerily well.  As the authors point out,</p>
<blockquote><p>It is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields ‘statistical significance,’ and then to report only what ‘worked’… This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result.</p></blockquote>
<p>To compound the problem, many researchers do not openly share their full data sets or calculation methods, and have few incentives to challenge one another’s findings.  The <em>Psychological Science</em> article hammers the former point home with a simulated experiment that “shows” listening to a Beatles song makes you older.  That’s hooey, of course, but the authors’ point is that without stricter guidelines around how data sets are reported, nearly any relationship can be presented as statistically significant.</p>
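<p>The authors&#8217; point&#8211;that exploring enough analytic alternatives all but guarantees a &#8220;significant&#8221; result&#8211;can be made concrete with a toy simulation. (This sketch is purely illustrative and not drawn from the <em>Psychological Science</em> article; the numbers are hypothetical.) If a researcher tries 20 pure-chance analyses, each with the conventional 5% false-positive rate, chance alone delivers at least one &#8220;finding&#8221; in roughly two thirds of studies:</p>

```python
import random

random.seed(0)

ALPHA = 0.05        # conventional significance threshold
ALTERNATIVES = 20   # analytic alternatives tried per "study"
TRIALS = 10_000     # simulated studies, none with any real effect

# A null study "finds something" if any one of its pure-chance
# tests happens to come up significant.
lucky = sum(
    any(random.random() < ALPHA for _ in range(ALTERNATIVES))
    for _ in range(TRIALS)
)
rate = lucky / TRIALS
print(f"Share of null studies with a 'significant' result: {rate:.2f}")

# Analytically: 1 - (1 - 0.05)**20 ≈ 0.64, i.e. nearly two thirds of
# effect-free studies would yield something "publishable" by chance.
```

The simulation assumes the 20 tests are independent, which real analytic alternatives rarely are; correlated alternatives inflate the rate less, but the qualitative point&#8211;that selective reporting manufactures significance&#8211;stands.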
<p>How big of a problem is this? In the medical community it has raised frightening <a href="http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328">questions about cancer studies</a> that had been the basis for new treatments. It has caused <a href="http://www.nature.com/embor/journal/v9/n1/full/7401143.html">an increase in the number of retractions</a> issued in high-profile scientific journals – and a <a href="http://retractionwatch.wordpress.com/">blog devoted to tracking them</a>. And lest you think this concern is limited to the “hard” sciences, think again – it has already raised discussions of implications in <a href="http://aidontheedge.info/2012/05/29/reflections-on-bias-and-complexity/">humanitarian aid</a> and in the <a href="http://www.forbes.com/sites/freekvermeulen/2012/01/06/publication-bias-or-why-you-cant-trust-any-of-the-research-you-read/">more mainstream business community</a> (the latter summing things up nicely with a headline, “Why You Can’t Trust Any of the Research You Read”).</p>
<p>Yikes.</p>
<p>The idea that the scientific method is easily mucked up opens up a whole host of mind-bending questions. (What if there’s a publication bias toward studies about publication bias?  Eeek…). It forces us to stop and think about the fledgling world of arts research – a world that has desperately wanted to find good, hard scientific evidence of impact for a long time. Randomized controlled trials, double-blind studies and other sophisticated research methods seemed like a holy grail, promising that if we could cleverly adapt them to meet our needs, we would have indisputable evidence of the importance of the arts, and good, hard data to guide how we direct our resources. In light of these controversies, should we question our desire to be better researchers?</p>
<p>No – but we should learn from others’ mistakes, and take a hard look at institutional issues common across our fields. Many of the problems the scientific community is experiencing aren’t about the tools scientists have at their disposal, but the cultures in which those tools are used. A few months ago the editors of two high-profile medical journals, Drs. Ferric Fang and Arturo Casadevall, <a href="http://iai.asm.org/content/80/3/897.full">put out a call for “structural reforms”</a> to combat a “hypercompetitive” and “insecure” working environment they believe to be the heart of the issue. The structural flaws they identify include inadequate resources, a “leaky pipeline” of emerging talent, agenda-driven funding and administrative bloat.</p>
<p>Sound familiar?</p>
<p>The long-term implications for all research communities will unfold over time. Many of Fang and Casadevall’s recommendations are similar to those made within our own field: directing more funding toward salary support to increase job stability, streamlining grant application and reporting processes, and examining the strengths and weaknesses of peer grant review. A number of other ideas have been floated that may change established research practices. Creating a “<a href="http://blog.givewell.org/2012/06/11/meta-research/">journal of good questions”</a> that decides which studies to publish before their results are known would reward researchers for their curiosity and the strength of their proposed methodology. <a href="http://www.geography.unt.edu/~rice/geog5190/5190handouts/falsepositives.pdf">Limiting the “degrees of freedom”</a> researchers have in gathering additional data if their original data set does not yield anything “interesting” would limit significance chasing and, in theory, create a culture more tolerant of inconclusive results.</p>
<p>Regardless of which, if any, of these ideas stick, we need to acknowledge two things: a) our research is in all likelihood as prone, if not more prone, to these problems as the “hard sciences,” and b) the “best practices” we have been trying to emulate are not “fixed practices.” It’s often said that what arts researchers seek to measure is too squishy to fit into the traditional scientific process. If more and more people are realizing the process has a squish of its own – well then, maybe we don’t need to play “catch up” so much as try new things.</p>
<p>We may even come up with ideas useful to the more “established” fields we have been trying to emulate. The authors of the study in the first (less depressing) New York <em>Times</em> article concluded the preschoolers they observed behaved like scientists because they “form[ed] hypotheses, [ran] experiments, calculat[ed] probabilities and decipher[ed] causal relationships about the world.” I suspect that a group of arts researchers, observing the same group of children, would have interpreted those same behaviors as artistic. Human instinct drives scientific inquiry and artistic inquiry, and muddies both. Artists, one could argue, are a little more used to the mud.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>Around the horn: Highly Efffective edition</title>
		<link>https://createquity.com/2012/07/around-the-horn-highly-efffective-edition/</link>
		<comments>https://createquity.com/2012/07/around-the-horn-highly-efffective-edition/#respond</comments>
		<pubDate>Thu, 19 Jul 2012 12:55:32 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Economy]]></category>
		<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Policy & Advocacy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Adam Huttler]]></category>
		<category><![CDATA[around the horn]]></category>
		<category><![CDATA[business models]]></category>
		<category><![CDATA[copyright]]></category>
		<category><![CDATA[cost disease]]></category>
		<category><![CDATA[cultural facilities]]></category>
		<category><![CDATA[cultural palaces]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[GiveWell]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[Irvine Foundation]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[Michael Kaiser]]></category>
		<category><![CDATA[NEA]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=3737</guid>
		<description><![CDATA[IN THE FIELD RIP Artnet Magazine; more here. I will always be grateful to Artnet&#8217;s Ben Davis for being just about the only arts journalist worth his salt during the whole Yosi Sergant debacle. Congratulations to GiveWell, which has announced a not-quite-merger with Good Ventures, an emerging foundation led by Cari Tuna and Dustin Moskovitz (the latter is one of the<a href="https://createquity.com/2012/07/around-the-horn-highly-efffective-edition/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p><strong>IN THE FIELD</strong></p>
<ul>
<li>RIP <a href="http://artsbeat.blogs.nytimes.com/2012/06/25/artnet-chief-steps-down/">Artnet Magazine</a>; more <a href="http://galleristny.com/2012/06/artnet-magazine-will-cease-publication/">here</a>. I will always be grateful to Artnet&#8217;s Ben Davis for being just about the only arts journalist <a href="http://www.artnet.com/magazineus/reviews/davis/questions-for-patrick-courrielche10-10-09.asp">worth his salt</a> during the whole Yosi Sergant debacle.</li>
<li>Congratulations to GiveWell, which has <a href="http://blog.givewell.org/2012/06/28/givewell-and-good-ventures/">announced a not-quite-merger</a> with Good Ventures, an emerging foundation led by Cari Tuna and Dustin Moskovitz (the latter is one of the founders of Facebook). The blog post is a bit thin on details, but it sounds like this arrangement will ensure GiveWell&#8217;s financial security for some time to come while substantially enhancing its real-world impact.</li>
<li>Indiana University is set to open the country&#8217;s first <a href="http://www.thenonprofittimes.com/article/detail/iu-board-approves-school-of-philanthropy-4704">School of Philanthropy</a> later this year. It&#8217;s early, of course, but these snippets from the article suggest to me that it&#8217;s a case of buyer beware: &#8220;As with any academic setting, funding is an issue&#8230;.With the nonprofit sector roughly 5 percent of the nation’s gross domestic product and 10 percent of the workforce, such [a] school could be a profit-center for the university, Rooney said.&#8221;</li>
<li>One of the NEA&#8217;s lesser known programs, the Citizens&#8217; Institute on Rural Design, will now be <a href="http://www.arts.gov/news/news12/CIRD.html">a partnership</a> between the NEA, the Department of Agriculture, Project for Public Spaces, the Orton Family Foundation, and CommunityMatters. CIRD facilitates and hosts workshops on community design in places with fewer than 50,000 people.</li>
</ul>
<p><strong>BIG IDEAS</strong></p>
<ul>
<li>Michael Kaiser has a penchant for inciting digital controversy, and his recent <a href="http://www.huffingtonpost.com/michael-kaiser/the-new-model-part-1_b_1605217.html">two</a>&#8211;<a href="http://www.huffingtonpost.com/michael-kaiser/the-new-model-part-2_b_1623893.html">part</a> post calling bullshit on &#8220;new business models&#8221; was no exception. At the core of the debate is this central question: how much is the nonprofit arts sector going to change in the next 50 years? Kaiser says not so much; Adam Huttler, on the other hand, thinks <a href="http://www.fracturedatlas.org/site/blog/2012/06/19/swimming-downstream-in-the-current-of-history/">quite a lot</a>. Huttler&#8217;s <a href="http://www.fracturedatlas.org/site/blog/2012/06/29/new-models-redux/">second post</a> on the subject, in particular, is one of his most thought-provoking and brilliant in quite some time. EmcArts&#8217;s <a href="http://artsfwd.org/richard-evans-on-appreciating-new-frameworks-for-the-arts/">Richard Evans</a> and Sarah Lutman also weighed in.</li>
<li>Whither the future of open data and philanthropy? The Knight Foundation is currently considering a proposal to <a href="http://philanthropy.blogspot.com/2012/06/opening-990-data.html">digitize 10 years of IRS 990 nonprofit data</a> and make it available to the public for free. GiveWell&#8217;s Alexander Berger, writing on his personal blog, argues that this presents a clear opportunity to GuideStar&#8217;s next president to <a href="http://marginalchange.blogspot.com/2012/06/disruption-in-nonprofit-sector-or-why.html">reform its business model</a> around open data. (GuideStar&#8217;s current president, Bob Ottenhoff responds in the comments.) And the Foundation Center&#8217;s Brad Smith makes a <a href="http://pndblog.typepad.com/pndblog/2012/07/philanthropys-data-dilemma.html">passionate case</a> for data standards and greater transparency among foundations.</li>
<li>We&#8217;ve now entered an era in which college-age students have <a href="http://www.npr.org/blogs/allsongs/2012/06/16/154863819/i-never-owned-any-music-to-begin-with">never known what it&#8217;s like</a> to have to pay for music. <a href="http://futureofmusic.org/blog/2012/06/19/bridging-gap-between-musicians-and-fans">Casey Rae</a> and <a href="http://parabasis.typepad.com/blog/2012/06/why-we-cant-have-nice-things.html">J. Holtham</a> have more.</li>
<li><a href="http://www.technologyinthearts.org/2012/06/cultural-preservation-future-concerns-trends-and-hypotheses/">What is the future of museums</a>?</li>
</ul>
<p><strong>BIG MONEY</strong></p>
<ul>
<li>The Irvine Foundation has announced its <a href="http://www.irvine.org/news-insights/entry/our-new-arts-strategys-first-grants">first set of grants</a> under its new arts strategy that emphasizes audience engagement.</li>
<li>Jon Silpayamanant makes the interesting point that <a href="http://www.insidethearts.com/buttsintheseats/2012/06/19/embracing-the-cost-disease/">sports teams have a performance income gap</a> (i.e., expenses that outpace ticket revenue) just like symphony orchestras do.</li>
<li>Wait, nonprofits are <a href="http://influencealley.nationaljournal.com/2012/06/koch-brothers-cato-to-settle-c.php">allowed to have shareholders</a>?<br />
<blockquote><p>The deal will settle a lawsuit the Koch brothers filed in February over shares that determine control of Cato. It results from the original division of shares between the two Koch brothers, Crane and late Cato Chairman William Niskanen. After Niskanen died of stroke complications in October, the Koch brothers claimed a founding shareholder agreement gave them the option to buy Niskanen&#8217;s shares. Crane held they should go to Niskanen&#8217;s widow, which would leave him in effective control of the organization.</p>
<p>The settlement involves dissolving the shareholder agreement. In addition, Crane is expected to retire under an agreement that allows him to select his successor, though the Koch brothers could veto the hiring.</p></blockquote>
</li>
</ul>
<p><strong>RESEARCH (AND EVALUATION) CORNER</strong></p>
<ul>
<li>FSG&#8217;s Valerie Bockstette points out the dangers of <a href="http://www.fsg.org/KnowledgeExchange/Blogs/StrategicEvaluation/PostID/307.aspx">measuring what&#8217;s easy to measure</a> instead of what&#8217;s most important.</li>
<li>The Colorado Health Foundation&#8217;s Anne Warhover describes <a href="http://www.effectivephilanthropy.org/blog/2012/06/how-evaluation-measures-up-a-ceos-perspective/">her organization&#8217;s approach to impact assessment</a>.</li>
<li>If you thought the theory of change and measurement framework for ArtsWave was ambitious, just take a look at this new <a href="http://www.theatlanticcities.com/jobs-and-economy/2012/06/how-measure-community-sustainability/2339/">comprehensive sustainability plan for Rockford, IL</a>, which intends to measure economic, social, and environmental outcomes in 16 categories including cultural life and the built environment. The transportation category alone tracks 43 indicators.</li>
<li>Kudos to the Cultural Policy Center at the University of Chicago for the most <a href="http://news.uchicago.edu/article/2012/06/28/careful-planning-and-focus-audience-crucial-success-new-cultural-facilities">blockbuster release</a> of an arts research study so far this year. Called &#8220;<a href="http://culturalpolicy.uchicago.edu/setinstone/">Set in Stone: Building America&#8217;s New Generation of Arts Facilities 1994-2008</a>,&#8221; the report takes a critical look at the billions of dollars thrown by arts institutions at new buildings, museum wings, expansions, renovations, etc. during the decade and a half in question. Authored by then-grad-student Joanna Woronkowicz (as her <a href="http://udini.proquest.com/view/cultural-infrastructure-in-the-pqid:2551992801/">dissertation</a>), Carrol Joynes, and about a half dozen others, &#8220;Set in Stone&#8221; argues that much of that building boom was of questionable wisdom. The report is available in full multimedia regalia, even including an <a href="http://news.uchicago.edu/article/2012/06/28/careful-planning-and-focus-audience-crucial-success-new-cultural-facilities">animated video</a>, and scored a <a href="http://www.nytimes.com/2012/06/28/arts/design/study-shows-expansion-can-be-unhealthy-for-arts-groups.html?ref=arts&amp;pagewanted=all">feature in the New York <em>Times</em></a>, along with reactions from <a href="http://www.theatlanticcities.com/arts-and-lifestyle/2012/07/we-built-way-too-many-cultural-institutions-during-good-years/2456/">The Atlantic Cities</a>, <a href="http://philanthropy.blogspot.com/2012/06/influence-of-evaluation-and-evaluating.html">Lucy Bernholz</a>, the <a href="http://nonprofitfinancefund.org/blog/edifice-complex">Nonprofit Finance Fund</a>, and <a href="http://www.arts.gov/artworks/">Sunil Iyengar</a> (now Woronkowicz&#8217;s boss at the NEA&#8217;s Office of Research and Analysis). 
Elizabeth Quaglieri has a <a href="http://www.technologyinthearts.org/2012/07/are-bricks-and-mortar-the-best-use-for-money-in-the-arts-the-overbuild-of-cultural-facilities-in-the-united-states">helpful summary</a> over at Technology in the Arts. Congratulations, Chicago, you sure know how to get our attention!</li>
</ul>
<p><strong>ETC.</strong></p>
<ul>
<li>Umm, please apply for the Createquity Writing Fellowship, <a href="http://blog.artsusa.org/2012/06/19/giving-thanks-in-americas-capital/">Delali Ayivor</a>?</li>
</ul>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/07/around-the-horn-highly-efffective-edition/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>In Defense of Logic Models</title>
		<link>https://createquity.com/2012/06/in-defense-of-logic-models/</link>
		<comments>https://createquity.com/2012/06/in-defense-of-logic-models/#comments</comments>
		<pubDate>Thu, 28 Jun 2012 12:50:18 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Philanthropy]]></category>
		<category><![CDATA[Policy & Advocacy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[ArtPlace]]></category>
		<category><![CDATA[collective impact]]></category>
		<category><![CDATA[creative placemaking]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[logic models]]></category>
		<category><![CDATA[measurement in the arts]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=3634</guid>
		<description><![CDATA[They're flexible, they're transparent, and chances are, they're already in your head.]]></description>
				<content:encoded><![CDATA[<p><a href="https://createquity.com/wp-content/uploads/2012/06/Logic1.jpg"><img loading="lazy" decoding="async" class="aligncenter wp-image-3642" src="https://createquity.com/wp-content/uploads/2012/06/Logic1.jpg" alt="Photo by 707d3k" width="560" height="277" srcset="https://createquity.com/wp-content/uploads/2012/06/Logic1.jpg 1024w, https://createquity.com/wp-content/uploads/2012/06/Logic1-300x148.jpg 300w" sizes="auto, (max-width: 560px) 100vw, 560px" /></a>Last month, my post <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html">Creative Placemaking Has an Outcomes Problem</a> generated a lot of discussion about creative placemaking and grantmaking strategy, much of it really great. If you haven’t had a chance, please check out these thoughtful and substantive responses by—just to name a few—<a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html#comment-8109">Richard Layman</a>, <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html#comment-8113">Niels Strandskov</a>, <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html#comment-8124">Seth Beattie</a>, <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html#comment-8713">Lance Olson</a>, <a href="http://www.artsjournal.com/artfulmanager/main/never-mind-the-outcome-behind-the-curtain.php">Andrew Taylor</a>, <a href="http://www.artsjournal.com/jumper/2012/05/funder-knows-best/">Diane Ragsdale</a>, <a href="http://www.tcdailyplanet.net/arts/lifestyle/column/arts-orbit/creative-placemaking">Laura Zabel</a>, and most recently, <a href="http://www.artplaceamerica.org/understanding-creative-placemaking/">ArtPlace itself</a>. 
I’m immensely grateful for the seriousness with which these and other readers have taken my critique, and their questions and suggestions for further reading have been tremendously illuminating.</p>
<p>Now that the talk has subsided a bit, it’s a good opportunity to clarify and elaborate upon some of the subtler points that I was trying to make in that piece, which definitely left a bit of room for people to read in their own interpretations.  So, just to be clear: despite the provocative title, I wasn’t trying to slam the practice of creative placemaking itself, nor call it into question as a focus area for policy and philanthropy in general. As I wrote in response to one of the comments on the original post, I believe strongly in the power of the arts to have a role in revitalizing communities, and I view the desire to direct resources toward bringing such efforts to life as a very positive impulse on the part of funders and policymakers. Furthermore, although I agree with the point made by <a href="https://createquity.com/2012/05/creative-placemaking-has-an-outcomes-problem.html#comment-8257">John Shibley</a> and others that the arts may not be the <em>best </em>way to foment economic development, no one said that cities and regions can only use one strategy. Economic development is a complex beast, and intuition and common sense would hold that there are most likely some specific situations in which the arts can have a real, irreplaceable, and catalytic impact.</p>
<p>My critique is really about how we don’t have much information about <em>what those situations are </em>– nor about how infusions of philanthropic capital <em>can make a difference in those situations</em>. What’s more, I am not confident that the tools we’re currently developing, as useful as they may be for other purposes, will get us there on their own. My contention is that logic models and their conceptual cousins, theories of change, can be useful tools in filling this gap – by forcing us to articulate our assumptions about the way the world works, and by providing a framework that we can use to test those assumptions. The problem is that most of the logic models that I see aren’t worked out to the level of detail that I believe is necessary to gain really useful information about the dynamics of these complex processes. In my post, I provided a couple of examples of theories that, while surely far from perfect, at least attempt to recognize some of the numerous and interlocking assumptions embedded in grantmaking of the kind engaged in by today’s funders supporting creative placemaking.</p>
<p>It’s clear from some of the responses, however, that not everybody shares my optimism in the utility of logic models. Laura Zabel <a href="http://www.tcdailyplanet.net/arts/lifestyle/column/arts-orbit/creative-placemaking">writes that she “hates” them</a> for being too reductive. Diane Ragsdale, taking a cynical view, is worried that they <a href="http://www.artsjournal.com/jumper/2012/05/funder-knows-best/">may be misused</a> by funders in order to make themselves seem smarter than they really are or blame grantees for failed strategies. <a href="http://www.artplaceamerica.org/understanding-creative-placemaking/">ArtPlace’s response</a> suggests that logic models raise the bar for research too high, and that because proving a causal connection between these investments and the change they produce (or don’t) is so difficult, we’re better off not trying. While I can sympathize with each of these critiques, I also think that they give logic models a bad rap. I feel that logic models are a tool of tremendous power whose potential is only beginning to be unlocked. It’s true that, just like philanthropy and policy, logic models can be done very badly. But that doesn’t mean there’s no gain for us in trying to do them well.</p>
<p>Before we get into all that, however, I’m guessing that some of you probably could use a refresher course on logic models and the terminology associated with them, which can be quite confusing. So let’s start with a little background on what this is all about.</p>
<p><strong>What the Hell <em>Is</em> a Logic Model, Anyway?</strong></p>
<p>Simply put, a logic model is a method of describing and visualizing a strategy. Logic models have their conceptual origin in the “<a href="http://en.wikipedia.org/wiki/Logical_framework_approach">logical framework approach</a>” originally developed for USAID in 1969 by Leon J. Rosenberg of Fry Consultants. Their use was largely concentrated in the international aid arena until 1996, when United Way of America published <a href="http://www.unitedwayslo.org/ComImpacFund/10/Excerpts_Outcomes.pdf">a manual on program outcome measurement</a> and encouraged its hundreds of local agencies and thousands of grantees to adopt logic models as a matter of course. Since then, large private funders such as the Kellogg and Hewlett Foundations have integrated logic models into their program design and execution, and the concept is commonly taught in graduate programs in public policy, urban planning, and beyond.</p>
<p>Even though logic models have achieved greater adoption over the past several decades, there is little standardization in the content, format, and level of ambition seen in professionally produced logic models for institutions large and small. Worse, everyone seems to want to come up with their own terms to describe features of the logic model, and as a result, you’ll notice a lot of variation in language as well. Below, I’ll do my best to isolate the elements that most of these efforts have in common.</p>
<p>Nearly all logic models contain the following fundamental elements. In combination, they describe a linear, causal pathway between programs or policy interventions and an aspirational end-state.</p>
<ul>
<li><strong>Activities</strong> are actions or strategies undertaken by the organization that is the subject of the logic model. These activities usually take place in the context of ongoing programs, although they can also be one-time projects, special initiatives, or policies such as legislation or regulation.</li>
<li><strong>Outputs </strong>refer to observable indications that the above activities are being implemented correctly and as designed.</li>
<li><strong>Outcomes </strong>are the desired short-, medium-, or long-term results of successful program or policy implementation.</li>
<li><strong>Impacts </strong>(or Goals) represent the highest purpose of the program, policy, or agency that is the subject of the logic model. Sometimes you’ll find these lumped in with Outcomes.</li>
</ul>
<div id="attachment_3638" style="width: 570px" class="wp-caption aligncenter"><a href="https://createquity.com/wp-content/uploads/2012/06/Bicycle-Helment-Campaign1.jpg"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3638" class="wp-image-3638" src="https://createquity.com/wp-content/uploads/2012/06/Bicycle-Helment-Campaign1.jpg" alt="Logic model for a bicycle helmet public information campaign" width="560" height="409" srcset="https://createquity.com/wp-content/uploads/2012/06/Bicycle-Helment-Campaign1.jpg 997w, https://createquity.com/wp-content/uploads/2012/06/Bicycle-Helment-Campaign1-300x219.jpg 300w" sizes="auto, (max-width: 560px) 100vw, 560px" /></a><p id="caption-attachment-3638" class="wp-caption-text">Logic model for a bicycle helmet public information campaign, courtesy RUSH Project</p></div>
<p>Many realizations of logic models combine these essential elements with additional information that provides contextual background for this causal pathway. Several of these supplemental concepts are listed below in approximate order from most common to most obscure.</p>
<ul>
<li><strong>Measures </strong>(or Indicators) for outputs, outcomes, and impacts are concrete, usually quantitative data points that shed light on the degree to which each result has been achieved.</li>
<li><strong>Inputs </strong>are resources available to the program or organization in accomplishing its goals.</li>
<li><strong>Assumptions </strong>are preconditions upon which the model rests. If one or more assumptions prove unsound, the integrity of the model may be threatened.</li>
<li><strong>Benchmarks </strong>extend the concept of measures to incorporate specific target goals (so, not just “# of petitions delivered to Congress” but “50,000 petitions delivered to Congress”).</li>
<li><strong>Target Population </strong>refers to the audience(s) for the activities listed in the logic model.</li>
<li><strong>Influential Factors </strong>are variables or circumstances that exist in the broader environment and could affect the performance of the strategy as designed (e.g., an upcoming election cycle whose outcome might change the underlying landscape in which the program operates).</li>
</ul>
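<p>For readers who like to tinker, the causal chain above can be sketched as a tiny data structure. This is purely a toy illustration of the vocabulary (the class, field names, and the helmet-campaign entries are all my own invention, not part of any standard evaluation toolkit):</p>

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Minimal logic model: a four-stage causal chain plus optional context."""
    activities: list[str]    # actions or strategies undertaken
    outputs: list[str]       # observable signs the activities are implemented as designed
    outcomes: list[str]      # desired short-, medium-, or long-term results
    impacts: list[str]       # the highest purpose of the program or policy
    assumptions: list[str] = field(default_factory=list)  # preconditions the model rests on

    def causal_chain(self) -> list[str]:
        """Return the stages flattened in causal order, from activities to impacts."""
        return self.activities + self.outputs + self.outcomes + self.impacts

# A hypothetical reading of the bicycle-helmet campaign pictured above
helmet_campaign = LogicModel(
    activities=["Distribute helmet-safety flyers at schools"],
    outputs=["# of flyers distributed"],
    outcomes=["More children wear helmets while riding"],
    impacts=["Fewer head injuries among child cyclists"],
    assumptions=["Awareness of risk changes riding behavior"],
)
```

<p>The point of the sketch is simply that a logic model is ordered: activities come first, impacts last, and the assumptions sit alongside the chain rather than inside it.</p>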
<p><strong>What About Theories of Change?</strong></p>
<p>A frequently employed alternative logic model approach strips out this latter set of contextual elements and instead aims to visualize the linear causal chain at a finer grain of detail. This version of a logic model is typically referred to as a <a href="http://www.theoryofchange.org/"><strong>theory of change</strong></a> (or, sometimes, a program theory). A well-executed theory of change diagram “unpacks” the processes and factors that lead to successful outcomes, exposing relationships between isolated variables that can then become the subject of research or evaluation.</p>
<div id="attachment_3637" style="width: 625px" class="wp-caption aligncenter"><a href="http://www.theoryofchange.org/pdf/Superwomen_Example.pdf"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3637" class="wp-image-3637 size-full" title="SuperWomen TOC example" src="https://createquity.com/wp-content/uploads/2012/06/SuperWomen-TOC-example1.png" alt="Partial theory of change from Project SuperWomen case study (ActKnowledge/Aspen Institute Roundtable on Community Change)" width="615" height="390" srcset="https://createquity.com/wp-content/uploads/2012/06/SuperWomen-TOC-example1.png 615w, https://createquity.com/wp-content/uploads/2012/06/SuperWomen-TOC-example1-300x190.png 300w" sizes="auto, (max-width: 615px) 100vw, 615px" /></a><p id="caption-attachment-3637" class="wp-caption-text">Partial theory of change from Project SuperWomen case study (ActKnowledge/Aspen Institute Roundtable on Community Change)</p></div>
<p>Sometimes logic models and theories of change are <a href="http://evaluationtoolsforracialequity.org/evaluation/resource/doc/TOCs_and_Logic_Models_forAEA.ppt">presented as distinct concepts</a>, while other times they really refer to the same thing. This is because logic models and theories of change evolved out of distinct communities of practice, but the philanthropic field has not always respected the distinction in the terminology it’s adopted to describe these tools. In my own practice I prefer to use theories of change, but for the sake of simplicity and readability, in the rest of this article I’m going to use the term “logic model” inclusively to refer to any diagram that clearly shows some combination of activities and outcomes, regardless of what other elements it may include or the visual approach it takes.</p>
<p style="text-align: center;">*</p>
<p>OK, now that we have our definitions in order, we can start talking about what makes logic models so awesome.</p>
<p><strong>Awesome #1: Logic Models Describe What’s Already Going On in Your Head</strong></p>
<p>So, here’s the thing: the core questions involved in creating any logic model—What am I trying to do? Why am I trying to do it? How will I know if I’ve succeeded or not?—represent the very essence of strategy. As <a href="http://www.jewfaq.org/sages.htm">a rabbi</a> might say, “the rest is commentary.” <em>If you have a strong sense of what the answers to these questions are, then you have a logic model in your head whether you realize it or not.</em> All the diagram does is make it explicit.</p>
<p>To illustrate this, we can look at a simple example. Let’s say I decide I’m done with this whole “arts” thing and I want to go to law school instead. I know, though, that in order to get into a good law school I need to get a good score on the LSAT. So, how can I make sure I get a good score? Intuitively, I decide that taking a test prep class is the way to go.</p>
<p>Why do I think taking a test prep class is a way to increase my score on the LSAT? Well, if my score isn’t as high as it could be, it’s probably due to some combination of two factors. First, I may not know the material well enough. So, if the class helps me to learn how to answer the test questions better, I’ll likely perform better on the test. Second, there may be a psychological factor as well. If I’m someone who gets nervous on tests, then my performance on them may suffer. The practice exams and deep engagement with the material that come with a class could help me to get more comfortable with the idea of the LSAT and make it seem less intimidating, thus improving my performance.</p>
<p>Seems logical enough, right? And voila, it lends itself quite easily to a logic model:</p>
<p><a href="https://createquity.com/wp-content/uploads/2012/06/Sample-program-theory1.png"><img loading="lazy" decoding="async" class="aligncenter wp-image-3636" src="https://createquity.com/wp-content/uploads/2012/06/Sample-program-theory1-1024x768.png" alt="Sample program theory" width="560" height="420" srcset="https://createquity.com/wp-content/uploads/2012/06/Sample-program-theory1-1024x768.png 1024w, https://createquity.com/wp-content/uploads/2012/06/Sample-program-theory1-300x225.png 300w, https://createquity.com/wp-content/uploads/2012/06/Sample-program-theory1.png 1365w" sizes="auto, (max-width: 560px) 100vw, 560px" /></a></p>
<p>The truth is that <em>any </em>decision you make, if it has any element of intentionality at all, can be diagrammed as a logic model. You might hate logic models with every fiber of your being and think they’re the stupidest thing ever created, but I’m telling you right now: <strong>if you believe in strategy, then you believe in logic models</strong>.</p>
<p><strong>Awesome #2: Logic Models Are Incredibly Flexible</strong></p>
<p>Now, there’s a difference between having a logic model in your head and having a <em>good</em> logic model in your head. The example above is simple, but it’s limited by that simplicity. It doesn’t explain why I might have decided to go to law school, or explore other ways that I could get into the school I want besides increasing my test score. In short, it’s pretty much just a straight-up mapping of a decision already made.</p>
<p>The best logic models don’t do that. Instead, they proceed with the end in mind (<em>what is the goal we want to achieve?</em>) and then methodically work <em>backwards </em>to understand what activities or strategies would be most appropriate to achieve that end. The ultimate outcome of this exercise may be a very different set of strategies than the ones you were originally contemplating or the programs you already have in place! Because of that, the logic model creation process can be great for opening up new ways of thinking about old problems or longstanding dreams, as well as clarifying what’s really important to you and/or your organization.</p>
<p>I mentioned earlier that not everyone is a fan of logic models. Here’s what Laura Zabel had to say about them in her post responding to mine:</p>
<blockquote><p>I hate logic models. For me they are, somehow, simultaneously too reductive and too complex. Too simple, too linear for how I think the world works and too dry, too chart-y for how beautiful the world is. They make me irrationally grumpy.</p></blockquote>
<p>Arlene Goldbard, in a 2010 essay, is <a href="http://arlenegoldbard.com/2010/05/27/924/">similarly grumpy</a> about logic models:</p>
<blockquote><p>[R]equiring one of these charts as part of a grant proposal bears about as much real relationship to community organizations’ work as would asking each to weave a placemat…[T]he task of boiling the answers down to colored bars often wastes days, compressing most of the useful meaning out of the inquiry.</p></blockquote>
<p>I can’t speak to Laura’s and Arlene’s experiences directly, but I know they are not uncommon. Unfortunately, logic models that are rife with imprecision, questionable assumptions, and inappropriate associations are more frustrating to work with than no logic model at all—and it’s not as easy as it looks to create logic models that are free of these flaws. Such problems are magnified when logic models are treated as edicts sent down from on high rather than the learning, living documents that they are intended to be.</p>
<p>In her post, Laura presents an alternative formulation of a logic model that describes her theory of change for creative placemaking: <strong>artists + love + authenticity -&gt; creative placemaking</strong>. While I’d classify this as more of a definition of creative placemaking than a logic model, it goes a long way toward illustrating my point that we all have latent logic models in our head that are just waiting to be expressed as such. Laura writes, “there’s no logic model in the world that can capture how a crazy parade [the <a href="http://www.hobt.org/mayday/">annual MayDay parade</a> in Minneapolis] can restore my faith in humanity.” I couldn&#8217;t disagree more – in fact, I made one, relying solely on the way Laura describes the parade in her post. Here you go:</p>
<p><a href="https://createquity.com/wp-content/uploads/2012/06/Laura-Zabels-Faith-in-Humanity1.png"><img loading="lazy" decoding="async" class="aligncenter wp-image-3635" src="https://createquity.com/wp-content/uploads/2012/06/Laura-Zabels-Faith-in-Humanity1.png" alt="Laura Zabel's Faith in Humanity" width="560" height="420" srcset="https://createquity.com/wp-content/uploads/2012/06/Laura-Zabels-Faith-in-Humanity1.png 960w, https://createquity.com/wp-content/uploads/2012/06/Laura-Zabels-Faith-in-Humanity1-300x225.png 300w" sizes="auto, (max-width: 560px) 100vw, 560px" /></a></p>
<p>For the past few months, I’ve been researching impact assessment methods used across the social sector in connection with some evaluation work we’re doing here at Fractured Atlas. Seemingly every year, someone comes up with a new way of evaluating impact, whether it’s for social purpose investing, choosing grants, or measuring externalities. I’m not done yet, but what I’ve found so far has only reinforced my appreciation for the logic model. The beauty of logic models is that, because they relate so directly to the fundamental elements of strategy, they are endlessly adaptable to almost any situation. I actually find it kind of funny when people call logic models too rigid, given the alternatives – especially considering how much of our lives is ruled by the granddaddy of rigid, one-dimensional success metrics: money.</p>
<p><strong>Awesome #3: Logic Models Are a Victory for Transparency</strong></p>
<div id="attachment_3641" style="width: 570px" class="wp-caption aligncenter"><a href="https://createquity.com/wp-content/uploads/2012/06/Hewlett-Strategic-Framework2.png"><img loading="lazy" decoding="async" aria-describedby="caption-attachment-3641" class="wp-image-3641" src="https://createquity.com/wp-content/uploads/2012/06/Hewlett-Strategic-Framework2.png" alt="Hewlett Foundation Performing Arts Program strategic framework" width="560" height="413" srcset="https://createquity.com/wp-content/uploads/2012/06/Hewlett-Strategic-Framework2.png 863w, https://createquity.com/wp-content/uploads/2012/06/Hewlett-Strategic-Framework2-300x221.png 300w" sizes="auto, (max-width: 560px) 100vw, 560px" /></a><p id="caption-attachment-3641" class="wp-caption-text">Hewlett Foundation Performing Arts Program strategic framework (co-developed in 2008 by yours truly)</p></div>
<p>One of the really powerful things about downloading the implicit strategy that exists in your head into a diagram is that it confronts you with gaps that may be present in that strategy and allows you to try and work through them. The Underpants Gnomes example that I used in my creative placemaking post is a great illustration of this. The Gnomes clearly felt that stealing underpants would lead to profit, but hadn’t clearly thought through the fuzzy middle part of the scheme. The Underpants Gnomes might seem like a fanciful exaggeration of the problem I’m talking about, but I would argue that there’s been more than one arts organization (and funding initiative) started with a similar lack of congruity between the proposed activities and intended results.</p>
<p>Diane Ragsdale is skeptical that logic models can serve this function, <a href="http://www.artsjournal.com/jumper/2012/05/funder-knows-best/">suggesting that funders might misuse logic models and turn them against their grantees</a>. First, I think it’s important to make a distinction here: Diane and Arlene and Laura are all talking about logic models at the level of the <em>individual organization/project.</em> While I think these can be helpful, in my mind the most important logic model is the one <em>for the funder itself</em>. This is admittedly a rarer practice, but several foundations – such as <a href="http://www.fordfoundation.org/issues/freedom-of-expression/supporting-diverse-arts-spaces/grant-making">Ford</a>, <a href="http://www.hewlett.org/uploads/files/PA_StratFramework.pdf">Hewlett</a>, <a href="http://mcknight.org/arts/McKArtsLogicModel.pdf">McKnight</a>, and <a href="http://www.tbf.org/uploadedFiles/tbforg/Grant_Seekers/TBF%20Strategic%20Frame%20Work.pdf">Boston</a> – have taken the step of not only developing a logic model to describe their grantmaking strategies, but also sharing that logic model publicly. Those that don&#8217;t publish at least sometimes make them available to peers outside the organization. Once a logic model is “out there,” there’s no taking it back. Therefore, logic models both pull back the curtain on a funder’s current thinking and also make it harder to project the illusion after the fact that a funder knew how things were going to work out all along.</p>
<p>Even more importantly, though, logic models are a victory for transparency with <em>oneself</em>, not just with others. The most important part of any logic model creation process is the set of <a href="http://www.missionparadox.com/the_mission_paradox_blog/2012/05/vital-assumptions.html">assumptions</a> revealed about how your program or organization works, and what it needs to be successful. Sometimes these assumptions might seem like no-brainers, and other times they will seem as unproven as they are central. Being comfortable with naming your assumptions as such is not just good practice for ensuring that your organization is constantly learning and growing. It’s also extremely helpful on a psychological level for dealing with the specter of possible failure. Because logic models explicitly draw a distinction between program design and program execution, they acknowledge the very real possibility that you could be doing your job <em>perfectly </em>and your program could still fail, because its theoretical foundation rests upon faulty assumptions. This is an incredibly freeing realization, because it means that radically changing or even scrapping a program that isn’t working doesn’t necessarily have to mean changing program leadership.</p>
<p>One of the reasons people sometimes feel anxious about evaluation and measurement is that they’re afraid of being held accountable, especially to things that they don’t have full control over or to metrics that don’t seem relevant to what they’re trying to do. When that happens, there are enormous incentives for managers and their supervisors to “cook the books” or otherwise game the system to show results that look better than reality, because any failure—even failures that are no one’s fault—reflects on them personally. That’s the danger of trying to enforce a data-driven culture without first developing the theoretical frameworks that determine what data you’re trying to collect. Because logic models separate the person from the program, they can distinguish between lagging initiatives that might just need more time to prove themselves, and failures of design that can be transformed into productive learning opportunities.</p>
<p><strong>A Note About Logic Models and “Proof”</strong></p>
<p>One of the criticisms directed at logic models generally, and by ArtPlace at my post specifically, is that they promote an impossible standard for proof. <a href="http://www.artplaceamerica.org/understanding-creative-placemaking/">Here’s what ArtPlace had to say about it</a>:</p>
<blockquote><p>A critical limitation of elaborate logic models of the style developed by Moss is that it is nearly impossible to quantify or measure all of the different factors and relationships proposed. While many of the asserted relationships are plausible…almost none are measured in practice. Many – if not most – of the variables…could only be gauged imprecisely at great expense or are not susceptible of measurement at all. Multiply this problem by the 50 variables the model uses and the dozens of relationships it asserts, and it’s clear that it is beyond anyone’s ability to actually prove or disprove the model for even a single metropolitan area, much less the nation.</p></blockquote>
<p>In one sense this criticism is right: these are difficult research challenges that highly qualified professionals have been struggling to address for decades, some of the more promising approaches demonstrated at the recent <a href="http://www.brookings.edu/events/2012/05/10-arts-development#ref-id=20120510_NEA_Panel_1">NEA/Brookings Institution convening on economic development and the arts</a> notwithstanding.</p>
<p>But contrary to popular perception, <a href="http://www.psychologytoday.com/blog/the-scientific-fundamentalist/200811/common-misconceptions-about-science-i-scientific-proof">the term “scientific proof” is a misnomer</a>: proof is a concept germane to mathematics, not science. (Admittedly, my previous post could have been clearer on this point.) Scientists, including social scientists, develop hypotheses about how the world works and then gather <em>evidence </em>to support or undermine those hypotheses. Whereas proof is black-and-white, evidence has shades of gray: it can be strong or weak, circumstantial or conclusive. My colleague Kiley Arroyo made a great courtroom analogy in response to my creative placemaking post: she wrote, “think of forensic analysis if you will. You&#8217;re not just going to look at where and how the bullet hit, but what it was shot from, where, by whom and why.” Our job as researchers is not to “prove” anything – instead, it’s to amass evidence in search of the truth.</p>
<p>The biggest problem that I see with most logic models is that they are <strong>too simple for what they are trying to describe</strong>, and thus consign us to amassing a whole bunch of weak evidence. Logic models are often developed as much for communication purposes as for research, and can thus face intense pressure to be “dumbed down” for public consumption. I frequently hear comments like, “make sure that there are no more than five categories, because after that you’re going to lose people,” and God help you if your diagram doesn’t all fit on one page. But think about it: would you apply this standard to a budget? To a work plan? For a major, mission-critical project with many moving parts? I’m not interested in logic models as a communication tool; I’m interested in them as a means to <em>help us do our jobs better</em>.</p>
<p>In the case of ArtsWave, we do intend to collect data to show progress towards the goals that have been established through the model. But the task is less daunting than it looks. Because we are not trying to “prove” the model, not everything has to be measured directly; indeed, not everything has to be measured at all! Instead, the model serves as a road map for us in considering what is<em> most</em> important to measure, given what we don’t know. Is it the assumption that the arts can differentiate Cincinnati from its peer cities in the minds of tourists and potential residents? Can we be confident that people from diverse backgrounds will interact with each other if they happen to come together for the same community-wide event? A smart research design will test the assumptions that are most in need of testing, and the purpose of the modeling exercise is to identify which assumptions those are.</p>
<p>It’s not like we are all alone in this effort, either. There is an ever-growing body of literature on the ways that the arts interact with communities, and there is no need for us to demonstrate yet again connections for which strong evidence exists in other contexts. Furthermore, since many of the data points involve stakeholders beyond the arts, there is an opportunity to collaborate with other local entities to share resources and develop knowledge infrastructure collaboratively. Cincinnati is home to the <a href="http://www.strivetogether.org/">STRIVE initiative</a>, which has been made famous in the broader social sector as the poster child for the “<a href="http://www.ssireview.org/articles/entry/collective_impact">Collective Impact</a>” concept as coined by consultants from FSG Social Impact Advisors. One of STRIVE’s chief accomplishments has been the development of an “<a href="http://www.fsg.org/Portals/0/Uploads/Documents/PDF/Breakthroughs_in_Shared_Measurement_complete.pdf">adaptive learning system</a>” for use by hundreds of education nonprofits in the region, which has helped align the efforts of those organizations around a common set of purposes and benchmarks. If it worked in education, and in the same city no less, why can’t it work in the arts?</p>
<p>But that being said, none of the three benefits that I’ve cited above—articulation, flexibility, and transparency—require “proving” the logic model. They all come, at least in part, just from <em>creating </em>one. And creating a logic model doesn’t have to be a tortured, involved process. It doesn’t have to cost hundreds of thousands of dollars. You can write them out on the back of a napkin (okay, for some things you might need a really big napkin). Creating a logic model for something that will require a lot of your time or money is one of the most highly leveraged activities you can possibly undertake. I hope more arts funders will undertake it.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/06/in-defense-of-logic-models/feed/</wfw:commentRss>
		<slash:comments>15</slash:comments>
		</item>
	</channel>
</rss>
