[Createquity Reruns] The Deduction for Charitable Contributions: The Sacred Cow of the Tax Code?

(Tax policy week continues at Createquity with this doozy of an analysis from editorial-board-member-to-be John Carnwath from April 2013. Believe it or not, thanks to some stellar performance in Google search rankings, an article about tax policy is now Createquity’s most-read blog post of all time! And for good reason, as John’s article expands beyond the usual rhetoric and surfaces some creative solutions in the debate around the charitable tax deduction that just might satisfy everyone. -IDM)


photo by Martha Soukup

In his most recent budget proposal, President Obama is seeking to impose a cap on itemized deductions in the personal income tax return – which includes the deduction for charitable contributions. This provision, part of the administration’s strategy to raise revenue to pay for government spending, has been a part of every White House budget proposal since 2009, and every year arts advocacy organizations join the rest of the nonprofit sector in opposing the changes. So far, the cap has been successfully warded off, but there’s growing concern that if Republicans and Democrats ever agree on sweeping tax reforms, the charitable deduction will be on the chopping block. The fear that limiting the tax deduction will lead to reduced donations to charitable organizations is particularly great this year due to the tax increases that were passed at the end of 2012, prompting the Charitable Giving Coalition to step up its resistance with a new website: protectgiving.org.

While it’s become a popular strategy on Capitol Hill to complain about the lack of progress while refusing to budge from one’s own policy positions, a case can be made that the nonprofit sector’s lobbying on behalf of the charitable deduction has neither improved the financial stability of the sector nor created greater legislative security. At best, it has limited the declines in individual giving in recent years. So rather than simply digging in our heels as we head into the next round of budget debates, let’s take a moment to explore a broader range of policy options and see which might make the most sense for the arts.

Before we get to that, though, here’s a refresher on the mechanics of the charitable tax deduction for anyone who needs it.


What is the charitable deduction and how does it work?

The tax deduction for charitable donations was established in 1917, just four years after the federal income tax was introduced. While there have been some changes over the years, in its basic form this provision allows taxpayers to deduct donations to nonprofits and charities from their taxable income. So if a taxpayer earns $50,000 and gives $2,000 to charity, she only has to pay taxes on $48,000. The rationale behind this provision was initially that the taxpayer who gives away $2,000 doesn’t have that money available to spend on herself, so it shouldn’t be counted as part of her income. Nowadays, the deduction is more commonly thought of as an incentive dangled before taxpayers to coax them into donating more money to charity. By allowing taxpayers to deduct charitable donations from their taxable income, the government essentially agrees to pay for a portion of the donation.

Think about it this way: If you earn $1,000 and you’re taxed at a rate of 30%, you have to pay $300 to the IRS and you end up with $700 in your pocket. But if you donate $100 to charity, your taxable income is reduced to $900. Your tax bill then comes to $270 ($900 x 30%). In return for giving $100 dollars to charity the government reduces your taxes by $30, so in the grand scheme of things that $100 check that you write to your favorite opera company really only sets you back $70.
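The arithmetic above generalizes to any marginal rate. Here is a minimal sketch in Python (illustrative figures only, not tax advice; the 30% rate and $1,000 income are the hypothetical numbers from the example):

```python
# Out-of-pocket cost of a fully deductible donation: the tax saved equals
# the donation times the donor's marginal rate.
def after_tax_cost(donation, marginal_rate):
    tax_saved = donation * marginal_rate
    return round(donation - tax_saved, 2)

income, rate, donation = 1_000, 0.30, 100
tax_without_gift = income * rate                    # $300 owed with no donation
tax_with_gift = (income - donation) * rate          # $270 owed after deducting
print(round(tax_without_gift - tax_with_gift, 2))   # 30.0 -- the government's share
print(after_tax_cost(donation, rate))               # 70.0 -- what the gift really costs
```

The same function shows the "price of giving" point made below: at a 10% marginal rate, `after_tax_cost(100, 0.10)` is $90, so the identical $100 gift costs a lower-bracket donor more out of pocket.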


Who benefits from the charitable deduction?

While this all sounds great in principle, there’s a big catch: not all taxpayers benefit from the charitable deduction. Initially the income tax only applied to a rather small number of wealthy Americans, but during World War II it was expanded to affect roughly 75% of the population. Instead of having all of these tax filers list their deductions individually—$42 for prescription medicine here, a $100 donation to a museum there—the IRS introduced the “standard deduction” in 1944. The standard deduction lets all filers lower their taxable income by a fixed amount. For the 2012 tax year that amount is $5,950 for single taxpayers and $11,900 for couples. That means that you only have to keep track of your deductions and itemize them on your income tax return if they exceed $5,950 (or $11,900 if you’re married). That saves a lot of taxpayers (not to mention the IRS) a huge headache, but it also means that the 70% of filers who take the standard deduction don’t get to write off their charitable donations. (One might argue that the non-itemizers benefit from the charitable deduction in a roundabout way, since a typical deduction for charitable donations was factored in when the standard deduction was calculated back in 1944, but the fact remains that the current deduction for charitable contributions and any changes to it are only relevant to about 1/3 of American tax filers.)

For those who do itemize deductions, the amount of the government’s subsidy towards charitable donations depends on the filer’s marginal income tax rate. If you’re in the 35% bracket and you donate $100 to a good cause, the government gives you $35, but if you’re in the 10% bracket you only get $10 back from Uncle Sam. Economists say that the “price of giving” is lower for the individual in the 35% bracket than for the one in the 10% bracket (e.g. note 1 here). Giving $100 to charity “costs” the former (presumably richer) person $65 and the latter $90. While this seems sort of unfair, it’s the result of having a progressive income tax system in which those who earn a lot pay a larger percentage of their incomes into the public purse.

This means wealthy taxpayers not only have more money in their bank accounts to give away, but when they donate to charity the government covers a larger portion of their donations. It is therefore no surprise that the rich are responsible for a large share of charitable giving. Although only 3% of tax filers have annual incomes over $200,000, those households contribute 36% of the money that individuals give to charity every year—a total of $73 billion in 2008. However, the federal government foots the bill for about a third of those donations through the deduction for charitable contributions (assuming that most of the individuals with incomes over $200,000 are in tax brackets with marginal rates over 30%).

One might say, “well it’s all for a good cause, so it doesn’t really matter if the government is paying for a portion of the donations,” but it turns out that taxpayers with high incomes choose to give their money to different causes than those who are less well-off, and the charitable deduction allows them to divert large amounts of government funds to their favorite organizations. The wealthy support educational institutions and the arts to a much greater extent than poor people, who tend to focus their giving on basic needs and religious organizations. The extent to which the arts depend on donors with high incomes for their contributions is quite striking. In 2005, 94% of the funds that arts organizations received through individual contributions came from households with annual incomes over $200,000.

Of course, the donors are not the only ones who benefit from the tax deduction. All of the people who receive services from nonprofits and charities may be considered indirect beneficiaries of this provision in the tax code. However, to determine whether the charitable deduction is the best way for the government to support the work of nonprofits we must take a closer look at the incentives that are created and how people respond to them.


Do donors respond to tax incentives?

The deduction for charitable contributions affects taxpayers in two different ways. On the one hand, we have the “price effect.” As noted above, higher marginal tax rates reduce the price of giving, creating a bigger incentive to contribute to charities. However, high marginal tax rates also mean that people have less money left in their pockets after paying their taxes. In general, if people’s incomes are reduced, one would expect them to become less generous donors. After paying for rent, food, and utilities, they have less money left over for nonessentials like vacations and charitable donations. This is called the “income effect.” Note that the income and price effects work in opposite directions. Higher marginal tax rates incentivize donations through the price effect, but they simultaneously create a disincentive through the income effect.

Several economists have examined donors’ responsiveness to tax incentives over the past few decades, but the results remain inconclusive. Most studies find that donors respond to tax incentives, but the historical record shows that the level of charitable contributions remains relatively constant over time when measured as a proportion of GDP regardless of the available tax incentives. Some studies suggest that higher-earning taxpayers are more responsive to the incentive than those who are less well-off and that there are differences between types of charities (religious, social, educational, etc.) that receive donations. Many policy analyses (CRS, CBO, TPC) therefore calculate the upper and lower limits of a range into which the effects of proposed policy changes are expected to fall rather than a specific estimate.


Considering policy options: goodbye deduction?

To establish the worst-case scenario as a baseline, one might ask what would happen if the charitable deduction were eliminated completely. Independent Sector, an advocacy organization for nonprofits and charities, recently put out a list of FAQs according to which “with no deduction for charitable gifts, itemized charitable giving would drop by between 25 percent and 36 percent total.” This assertion is rather misleading. The study from which Independent Sector gets these numbers states that a taxpayer in the 30% income tax bracket might reduce his contributions by 25-36% if the deduction were eliminated. Since the incentive to donate depends on the filer’s marginal tax rate and 98% of households face rates under 30%, the reduction in the total amount of individual contributions is likely to be much smaller than Independent Sector suggests.

The truth is, we have no idea what would happen if the tax deduction were eliminated. Not only have studies of the price and income effects been inconclusive, but they are all based on observations of how donors have reacted to incremental changes in tax rates and deductibility in the past. These estimates may be useful in predicting the effect of small changes within the range of what’s been observed in the past, but there’s no reason to believe that the response would be the same once the government’s incentive approaches zero. In fact, economic theory would predict that it’s not the same.

For example, if the deduction were eliminated completely, one might expect some donors to dig deeper into their pockets to keep their favorite charities afloat. However, some wealthy Republicans might cease all charitable donations to protest the fact that they’re having to pay more taxes, secretly hoping to blame the financial hardships of the charitable sector on the Democrats in the next elections. These types of reactions are difficult to predict. One thing is certain: if the indirect subsidy that the government provides through the charitable deduction were eliminated in order to reduce the deficit, individual donors would have to dig deeper into their pockets to sustain nonprofits at their current level of activity. And if the entire nonprofit sector were in severe financial distress, one can easily imagine that some donors would reallocate their gifts towards hospitals and basic social services, compounding the impact on the arts.


Capping the deduction

The good news is that no one has proposed eliminating the deduction altogether. Obama’s 28% cap on deductions, on the other hand, remains a very real possibility.

Obama suggests that the government could increase its revenue by capping the rate at which deductions reduce a filer’s tax bill at 28%. As mentioned above, the size of the tax incentive is generally determined by the marginal tax rate that taxpayers incur, but Obama’s proposal sets 28% as the maximum rate anyone can claim. For the vast majority of households, this would be of no consequence. If you’re in the 10%, 15%, 25%, or 28% tax brackets, you still get your deduction as normal. But the 2% of filers who itemize their deductions and face marginal tax rates over 28% would no longer be able to reduce the tax on their donations to zero. People in the 30% bracket, for example, would still have to pay a 2% tax on their charitable gifts. They owe 30% according to their tax bracket and they only get 28% back on the donated amount (due to the cap), so the IRS gets to keep the 2% difference.
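In code, the cap logic described above looks like this (a simplified marginal-rate model assumed for illustration; it ignores phase-outs and other interactions in the real tax code):

```python
# Value of each donated dollar under a rate cap on deductions.
def tax_saved_per_dollar(marginal_rate, cap=0.28):
    return round(min(marginal_rate, cap), 4)

# Tax the donor still owes on each donated dollar once the cap binds.
def residual_tax_per_dollar(marginal_rate, cap=0.28):
    return round(max(marginal_rate - cap, 0), 4)

print(tax_saved_per_dollar(0.15))      # 0.15 -- below the cap, unaffected
print(tax_saved_per_dollar(0.396))     # 0.28 -- capped for top-bracket filers
print(residual_tax_per_dollar(0.30))   # 0.02 -- the 2% the IRS keeps
```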

How might this cap affect contributions to charitable causes? The short answer is that it will most likely result in a minor but noticeable reduction in contributions. Here’s what people are saying:

  • The Center on Philanthropy at Indiana University estimates that the cap will lead to an $820 million (0.4%) reduction in charitable giving in the first year of implementation, increasing to $1.31 billion (0.7%) in the second year.
  • In 2010 the Congressional Research Service put the decline in charitable giving in the 0.16-1.28% range.
  • In a back-of-an-envelope calculation for the Washington Post, Harvard economist Martin Feldstein estimates that the 28% cap could reduce charitable giving from individuals by $7 billion, which amounts to a 3% decline (relative to the $230 billion in charitable contributions from individuals reported in Giving USA 2009).
  • Len Burman of the Tax Policy Center and the Center on Budget and Policy Priorities came up with similar figures in 2009.

Taking all of this together, it seems we’re talking about a 0.5% to 3% decline in gifts from individuals.

The impact on arts nonprofits is likely to be a little bit higher than that, since the cap will primarily affect the wealthy taxpayers who contribute most to the arts. The 2010 study by the Congressional Research Service includes an analysis of how the 28% cap would affect different segments of the nonprofit sector. It estimates the reduction in individual giving to the arts to be around 2.4% (compared to 0.16-1.28% overall).

The figures above were calculated based on the tax rates that applied between 2003 and 2012, but as we know, the tax rate for the highest income bracket was increased from 35% to 39.6% at the beginning of this year. How does that change things? If charitable contributions remain fully deductible, we would expect the higher marginal tax rates to increase donations due to the price effect. However, if Obama’s proposal to cap total deductions goes through, the reverse is to be expected—the higher tax rates actually exacerbate the decline in charitable giving caused by the cap. That’s because the higher tax rates reduce the taxpayers’ disposable income, bringing the income effect into play, while the cap on deductions holds the price of giving constant.

The Congressional Research Service estimates that the combined effect of the 28% cap on deductions and the higher marginal rates that Obama sought to impose on taxpayers earning more than $200,000 would reduce giving by 0.28% to 2.27%. That’s almost double the decline that they estimated for the cap on deductions alone (see above). The Center on Philanthropy arrives at similar figures when including Obama’s proposed tax hikes. Those projections still fall within the 0.5% to 3% range mentioned above. If we take the worst-case scenarios for the 28% cap and the largest estimates for the impact of the higher tax rates, we might be looking at a 5 or 6% decline in charitable giving.

So it looks like we don’t need to fear that individual contributions would drop by a quarter if the 28% cap were introduced, with or without increases in the top marginal tax rates. Nonetheless, a 5-6% decline is nothing to take lightly, and for organizations that are already reeling from the recent recession even a modest reduction in individual contributions could be the final straw. Moreover, the estimates apply to total charitable donations nationwide, but individual organizations could be unlucky and find that several of their major benefactors scale back their contributions more drastically than the national average, leaving gaping holes in their budgets.


Other options: expanding to non-itemizers and adding “floors”

Faced with this uncertainty, the response from arts advocacy organizations has been to dig in their heels and demand that the deduction for charitable contributions remain intact. However, as Michael Rushton notes, there’s little reason to believe that there’s anything magical about our current tax code; in fact, the charitable deduction has been criticized in the past for several reasons (notably for being inefficient, regressive, and having an unclear theoretical justification). So instead of clinging to the status quo as our only hope for survival, we might ask: what changes to the current system would lead to the best outcomes for arts organizations? How might we incentivize charitable donations while supporting the government’s goal of reducing the federal deficit?

In 2011 the Congressional Budget Office came up with 11 different policy scenarios and estimated their likely impact on tax revenue and charitable giving. These included:

  • allowing all taxpayers to write off charitable gifts on their tax returns, rather than just those who itemize deductions
  • creating a minimum donation (either a fixed dollar amount or a percentage of the donor’s AGI) which would have to be exceeded to qualify for the deduction
  • converting the deduction into a tax credit (which would give all taxpayers the same 15 or 25% tax break on charitable contributions instead of linking it to the donor’s marginal tax rate)

This study found that by extending the deduction to all filers and simultaneously establishing $500 ($1,000 for couples) as the minimum donation required to qualify for the deduction the government would be able to increase revenues by $2.5 billion annually, while boosting contributions to charitable causes by $800 million. Or even better, by replacing the deduction with a 25% tax credit for all taxpayers, the government would save almost the same amount, while driving up donations by 1.5%.

Since the government’s objective right now is to reduce the deficit, presumably without harming the nonprofit sector unnecessarily, Eugene Steuerle of the Tax Policy Center has advocated for expanding the tax deduction to all filers, with a minimum contribution of 1.7% of the donor’s AGI required to qualify. This would net the government between $10.4 billion and $11 billion per year without reducing charitable donations by a dime. The argument for establishing a minimum contribution to qualify (often referred to as a “floor”) is that people are likely to give a small amount of money to charity regardless of whether they receive a tax break or not. It’s therefore not necessary for the government to forgo any revenue for that portion of their contributions. Further, at a certain point the administrative costs of tracking small donations—acknowledging their receipt, submitting documentation to the IRS, checking for fraud—outweigh the benefit. For those who object that a $1,000 donation is a far bigger sacrifice for a couple that only earns $20,000 a year than for a millionaire, a floor that is linked to the taxpayer’s AGI might pose an attractive alternative. With a 2% floor, someone earning $20,000 could claim the deduction by making a $400 donation, while someone earning $500,000 would have to donate $10,000 to qualify.
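The AGI-linked floor from that last example can be sketched as follows (this treats the floor as a qualifying threshold, as the example does; some designs instead allow deducting only the amount given above the floor):

```python
# A gift qualifies for the deduction only if it clears a floor set as a
# percentage of the donor's adjusted gross income.
def clears_floor(donation, agi, floor_rate=0.02):
    floor = round(agi * floor_rate, 2)
    return donation >= floor

print(clears_floor(400, 20_000))       # True  -- the 2% floor on $20k AGI is $400
print(clears_floor(5_000, 500_000))    # False -- the floor on $500k AGI is $10,000
print(clears_floor(10_000, 500_000))   # True
```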


Beyond the bottom line

Reforming the charitable tax deduction might offer other benefits as well. For example, it could provide an opportunity to change the composition of our donor lists. By giving those in lower income categories greater incentives to support our work and allowing them to leverage some of the indirect subsidy that the government provides through its tax breaks, arts organizations might be able to diversify the ranks of their donors, so as to be less dependent on a small wealthy elite. Based on the CBO’s estimates, by replacing the tax deduction with a 25% credit that is subject to a low floor (say 1% of AGI), it should be possible to maintain charitable donations at their current levels or even increase them slightly while saving the government several billions of dollars annually and allowing donors from lower income categories to acquire a bigger stake in nonprofit arts organizations. A more diverse pool of donors, both in terms of their economic status and their tastes, would reduce the financial risk of artistic experimentation and could allow companies to diversify their programming in ways that their current (predominantly wealthy) donors might not support.

All in all, reforming the deduction on charitable contributions isn’t necessarily a bad thing for the arts. There are ways of changing the tax code that could actually increase revenues and diversify the sources of income for arts organizations, even while helping to reduce the federal deficit. Since any change creates uncertainty and will likely produce losers as well as winners, I can understand arts administrators and advocates who would rather stick with an imperfect status quo than commit their careers and their organizations to an uncertain future. However, I believe that participating in the discussion and shaping the outcomes to fit our sector’s interests will ultimately prove more productive than trying to block change from the start.


[Createquity Reruns] Federal arts funding: a trace ingredient in the sausage factory of government spending

(Following a brief hiatus, our summer rerun programming returns this week with an homage to that wonkiest of topics – tax policy! In this post from June 2011, Createquity Fellow Aaron Andersen breaks down how the arts fit into the federal budget and puts them in context with tax breaks offered to other special interests, including private industry. “Federal arts funding” later had the honor of receiving Createquity’s first citation in a Wikipedia article. -IDM)

As has been previously reported, public funding for the arts is one of the many foci of our national debate over fiscal policy. While funding cuts for the National Endowment for the Arts and the National Endowment for the Humanities (and potential but unrealized cuts at the Smithsonian) all made national headlines, the Corporation for Public Broadcasting, which unwillingly and inaccurately functions as a proxy for NPR in the public imagination, was the hottest of the hot potatoes. The House of Representatives voted to defund the CPB entirely, but in the end, appropriations were essentially unchanged from the year before. This may seem like a dead issue for the moment, but there is an extremely good chance these battles will resurface in the fiscal 2012 budget process.

Federal arts funding is a small share of the budget
So, how much money are we talking about? The CPB is getting $455 million (of which about $90 million goes to radio stations). The Smithsonian gets the biggest federal arts allocation, at $761 million. If you add all arts-related federal programs together, funding for the current fiscal year totals just over $2.5 billion. Honestly, that number looks pretty large to me. I can’t imagine what a billion of anything really looks like. But the total federal budget for this fiscal year (which runs through September) is $3.82 trillion. So the federal arts funding we’ve identified is 0.066% of the total federal budget. And when we’re only shouting about CPB, we’re talking about 0.012%. That is twelve one-thousandths of one percent.
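Those shares are easy to verify from the figures in the paragraph above (using $2.5 billion flat, which rounds the arts share to 0.065%; the post's 0.066% reflects "just over" $2.5 billion):

```python
total_budget = 3.82e12   # total federal budget for the fiscal year
arts_total = 2.5e9       # all arts-related federal programs combined
cpb = 455e6              # Corporation for Public Broadcasting appropriation

arts_share = arts_total / total_budget * 100
cpb_share = cpb / total_budget * 100
print(round(arts_share, 3))   # 0.065 -- percent of the federal budget
print(round(cpb_share, 3))    # 0.012 -- "twelve one-thousandths of one percent"
```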

So, why? Why was this a central topic in budget debates this spring? Was it really all about this James O’Keefe scandal? Are we going to rehash the entire set of arguments again this summer and fall as we debate the fiscal 2012 budget?

Maybe the real reason we keep putting Elmo’s head on the chopping block is because we don’t really understand the numbers, after all. According to a CNN poll, most Americans do not think the CPB gets twelve one-thousandths of one percent of the budget. Actually, only 27% of those surveyed believe the CPB gets less than 1%[1] of the total budget. 40% believe the CPB gets 1-5%. Everyone else believed the appropriation to be greater than 5%, and an astonishing 7% of those surveyed believe the Corporation for Public Broadcasting gets more than 50% of the budget (which would have to be close to $2 trillion). If that were true, it would put the pledge premium tote bag and mug industry completely out of business, and This American Life would be hosted by Robin Leach. The survey also asked whether funding for different funding categories should be increased, decreased, kept the same, or eliminated. 16% of respondents wanted CPB funding to be totally eliminated.

We care about other spending, too, right?
It’s interesting to examine the other spending categories in the poll. The survey asked about Medicare, Medicaid, Social Security and defense. Those are pretty big portions of the budget, so it makes perfect sense they’d be in the list. The rest of the categories are decidedly different: foreign aid (also highly overestimated by respondents), benefits for retired government workers, food and nutrition assistance for poor people, housing for the poor, and federal education funding. And that’s it.

But, what’s missing here? Quite a lot, actually. What about subsidies to the oil & gas industry, which the Obama administration claims add up to $4 billion (about 9 times CPB funding)? What about direct subsidies for farmers, which were about $5 billion last year? Tax exemptions for ethanol production aren’t mentioned, either. Nor are the $8.5 billion in subsidies given to the airlines since 9/11 simply to help them survive. These subsidies went to for-profit industries, which are theoretically subject to the rigor of the free market and exist for the profit of their shareholders. And yet, more discussion is generated by $2.5 billion in subsidies to arts organizations, both governmental and non-profit, that explicitly exist for the public benefit and do not have shareholders.

Why didn’t CNN ask about mortgage interest tax deductions of $88.5 billion in 2008, 200 times this year’s CPB funding? What about first time home buyer and hybrid vehicle tax credits? In 2008, contributions to employee retirement and pension funds, and tax deferrals on the earnings in those funds, lost the federal government $117.7 billion in tax revenue. There are many, many more examples. Decisions to fund and subsidize these sectors of the U.S. economy are just as important as decisions about arts funding. And the amounts are significantly higher than the $2.5 billion of federal arts funding in question.

How is it so easy for Congress to ignore all of this during budget battles, and instead focus on whether Juan Williams should have been fired or not? One reason is that subsidies can easily be swept under the rug, when the rug is the tax code.

Tax breaks: spending with less scrutiny
Tax deductions and credits, also called tax expenditures, are a form of government spending, as we can see clearly from the now-ended hybrid vehicle tax credits. The federal government wanted to provide incentives for the purchase of fuel-efficient vehicles. It would have cost the federal government about the same to send a check directly to hybrid buyers, perhaps processed at the dealerships, as it cost to reduce tax bills by the same amount. (Transaction and processing costs might be different, but the bulk of the cost would have been the same.)

The difference between tax expenditures and direct spending is that the former are not part of large budget bills, the kind that can shut down the government if not passed. Tax expenditures can certainly be treated as political footballs. But they are far less likely to be at the center of a showdown. Not only that, if Congress adds a tax expenditure in some legislation, it is a spending increase that can be framed to look like a tax cut, because it reduces tax revenues. That makes it more politically palatable for both parties, even if it has nothing to do with taxed income, and even if it distorts markets.

Consider a very large tax expenditure: $88.5 billion (2008 figure) worth of mortgage interest tax deductions (almost 200 years of Corporation for Public Broadcasting funding). Interest you pay on your mortgage gets deducted from your taxable income. Thus, if you’re comfortably into the 25% tax bracket, this tax expenditure is worth a quarter of what you paid in mortgage interest during the year. This creates an explicit incentive for people to buy their own homes by borrowing. If creating a home ownership and borrowing incentive sounds a little off to you, you might be recalling the financial collapse that precipitated the Great Recession. Like the repackaging of loans into mortgage-backed securities that contributed to the housing price bubble of the previous decade, this deduction effectively makes borrowing cheaper. If borrowing is cheaper for everybody, then everybody has more to spend, and if everybody would like to buy a somewhat nicer or bigger home if they could, then all the home prices are simply going to increase. This deduction, therefore, distorts the market and leads to increased prices. Who benefits if all prices in the market are inflated to take advantage of this deduction? It helps the realtors, who get paid a percentage of the sale price. And it helps the home building industry.

What is the point? Do I want a repeal of this tax deduction? Personally, no, I’ve already got a mortgage, and of course I want to keep my deduction. It is simply important to understand that this is spending, too. The mortgage interest deduction is really a government spending program that encourages people to buy instead of rent, and has the unintended effect of inflating home prices. And it’s a pretty large one! Why don’t we publicly debate this spending program, which is 200 times greater than the CPB budget, and is of debatable long-term utility? We don’t have to talk about it, because it’s not in the budget. It’s in the tax code, so it looks like a way to reduce taxes, rather than a way to subsidize the home sale industry.

Reframing the conversation
If subsidized arts workers are labeled as something like freeloaders in public discourse, then farmers, homeowners, hybrid vehicle buyers, the airlines, and the oil & gas industry are freeloaders too. Ayn Rand is very popular again among conservatives, so where is the conservative outcry against oil & gas subsidies? Instead, we are offered a redefinition of the “free market capitalist system” as something that requires government subsidy. Oxymorons rule the day when the free market must be subsidized, and arts created explicitly in the public interest, without a profit to distribute, must stand alone.

The issue of arts funding is quite likely to be revived when the fiscal 2012 budget is to be presented this fall. Conservative legislators have been able to score political points with this issue for years. But we have also seen President Obama bring greater scrutiny to bear on oil industry tax breaks, and he was making political progress in April. If uncertainty regarding oil supplies in the Middle East fades by this fall, we can perhaps expect that some part of the government spending conversation will deal with oil and gas tax expenditures.

Arts advocates, however, should not sit on our hands and wait for the President to shift the focus to federal subsidies of other industries in the budget and tax code. We are supposed to be very good at telling stories, so we ought to thoughtfully study our budget and tax code and engage with our citizenry on those issues that are most relevant and significant. It’s not just a matter of self-interest, though that is obviously part of the equation. America’s budget deficit and public debt are ours. And when we only discuss federal budgets when we launch a campaign to save our NEA grants and Sesame Street, we are lending legitimacy to those who would focus on the nickels and dimes while ignoring the big budgetary issues. We have the capacity for wider scope.

1. If you look at the wording of the poll question, you can see it is potentially a bit misleading. It asks, “Just give me your best guess — you can pick any number from one percent to a hundred percent, or if you think it was less than one percent, you can say that too.” The question first asks people to choose between 1-100%, so it anchors the idea of whole number percentages in the listener’s mind, then offers a less than 1% option as an alternative, after they’ve already framed the question in terms of whole number percentages.

Leave a comment

We’re Fixing Up the Place

Now that you’ve all helped us reach our Indiegogo campaign goal (thanks again!), it’s time to start putting that money to good work. We just returned from our staff retreat (more updates coming soon for those on the “Early Warning” list), and now we’re beginning the search for someone to help us redesign createquity.com.

It’s been four years since our last redesign by the excellent Evan Stein/VANPOP, and much has changed in the world of Createquity and digital strategy. We’re in need of a new logo, new features for the site, and a refreshed design aesthetic. You’ll find more details on technical requirements and the proposal process in the RFP, but if you know of a designer/developer with experience adapting WordPress themes, please help us reach out to them this week. We’re looking for a response to the RFP by this Friday, July 18.

We want this redesign to better support our expanded goals for Createquity, as well as your needs, dear reader. Help us help you, and pass on any tips for excellent digital professionals.

Leave a comment

Thank you!

All I can say is WOW. In the last 40+ hours of our campaign, more than $4,000 came in from 33 donors to put us over the top and then some. In the end, we blew past our goal, raising $11,430 (counting an employer match that was not included in the Indiegogo totals). Thank you to everyone who made this possible!

We’ll be following up with all of our donors individually, but I want to briefly thank a few folks in particular. First, I want to acknowledge our contributors at the Founding Sponsor level, MailChimp and Pamela York Klainer, without both of whom it would have been much harder for this campaign to meet its goal. You’ll be seeing their sponsor credits on our About page when we relaunch. Thomas Cott, Diane Ragsdale, Nina Simon, and Andrew Taylor all enhanced our campaign in special ways by either recording videos of support or donating items for us to offer as perks, and provided invaluable “social proof” for this enterprise. Our entire network of people who have been directly involved with Createquity came through huge, both contributing and spreading the word: 100% of our editorial team donated, along with many of our former Writing Fellows and guest bloggers, and we also received amazing support from coworkers, bosses, and current and former interns. And finally, I want to give a special thank you to my fellow editorial team member Jackie Hasa, who was incredibly helpful and effective in managing the campaign from behind the scenes.

There’s no time to sit back and savor the moment: all the while the campaign has been going on, we’ve been hard at work planning an editorial retreat that is set to take place this weekend. Stay tuned for more announcements as we start to put these resources into action. We promise it will be money well spent.

Leave a comment

[Createquity Reruns] Solving the Underpants Gnomes Problem: Towards an Evidence-Based Arts Policy

(Arts Research Week at Createquity concludes with this speech/post originally delivered at the University of Chicago’s Cultural Policy Center on November 14, 2012 and published on the blog in February 2013. This diagnosis of how our arts research infrastructure is failing us, a vision for how we could fix it, and why it all matters – a lot – is emblematic of the more advocacy-driven approach we intend to take upon our relaunch in the fall. I’m glad to say that there has been progress on some of these recommendations even in just the past year and a half, in particular the formation of the Cultural Research Network to connect researchers with each other and start the process of field-building. Another reason this talk is significant is that it led to my first connection with current Createquity editorial team member John Carnwath! -IDM)

The actual lecture portion of this talk occupies the first 52 minutes of the video, and the first 27 of those minutes are a recap/synthesis of material that will be familiar to regular readers of this blog (specifically, Creative Placemaking Has an Outcomes Problem and In Defense of Logic Models). Since I didn’t write out the speech in advance, I don’t have a transcript for it. However, below is a reconstruction of the new material from my notes, so you can get a taste for it if you don’t have time to watch the whole thing right now. (You’ll notice I make a number of generalizations in the speech about the ways in which arts practitioners interact with research. These are based on observation and personal experience, and are best understood as my working hypotheses.)


[starting at 26:55]

Why is this integration between data and strategy important? Because research is only valuable insofar as it influences decisions. This is why logic models are awesome – they are a visual depiction of strategy. And there is no such thing as strategy without cause and effect. Think about that for a second. Our lives can be understood as a set of circumstances and decisions. We make decisions to try to improve our circumstances, and sometimes the circumstances of those around us. Every decision you make is based on a prediction, whether explicitly articulated or not, about the results of that decision. Every decision, therefore, carries with it some degree of uncertainty. This uncertainty can be expressed another way: as an assumption about the way the world works and the context in which your decision is being made. These assumptions are distinguished from known facts.

If you can reduce the uncertainty associated with your assumptions, the chances that you will make the right decision will increase. So, how do you reduce that uncertainty? Through research, of course! Studying what has happened in the past can inform what is likely to happen in the future. Studying what has happened in other contexts can inform what is likely to happen in your context. And studying what is happening now can tell you whether your assumptions seem spot on or off by a mile. Alas, research and practice in our field are frequently disconnected in problematic ways. Six issues are preventing us from reaching our potential.

Issue #1: Capacity

Supply and demand apply as much to research as they do to artists. There are far more studies out there than a normal arts professional can possibly fully process. I wish I could tell you how many research reports are published in the arts each year, but nobody knows! To establish a lower bound, I went back over last year’s [2011] “around the horn” posts, which report new research studies that I hear about. I counted at least 41 relevant arts-research-related publications – a tiny fraction, I’m sure, of total output. To make matters worse, research reports are long, and arts professionals are busy. For the Createquity Writing Fellowship program, participants are required to analyze a work of arts research for the Createquity Arts Policy Library. I collect data on how long it takes to do this, and consistently, it requires 30-80 hours to research, analyze and write just one piece! Multiply this by the number of new studies each year, and you can start to see the magnitude of the problem.

Issue #2: Dissemination

Which research reports is an arts practitioner likely to even know about? Certainly not all of them, because there is almost no meaningful connection between the academic research infrastructure and the professional arts ecosystem. Lots of research relevant to the arts is published in academic journals each year, but unless the researcher was commissioned to do the work by a foundation, we never hear about it. Academic papers are typically behind a paywall, and most arts organizations don’t have journal subscriptions. To give an example, after I wrote about Richard Florida’s Rise of the Creative Class, Florida pointed me to a study in two parts by two Dutch researchers. It’s one of the best resources I’ve come across for creative class theory, but I’ve never heard anyone besides the two of us even mention either study.

Issue #3: Interpretation

Research reports inevitably reflect the researcher’s voice and agenda. This is especially true of executive summaries and press releases, which are often all anyone “reads” of a research report. Probably the most common agenda, of course, is to convey that the researcher knows what he/she is talking about. Another common agenda is to ensure repeat business from, or at least a continuing relationship with, the client who commissioned the study. The reality, however, is that research varies widely in quality. There’s no certification process; anyone can call themselves a researcher. But even highly respected professionals can make mistakes, pursue questionable methods, or overlook obvious holes in their logic. And, in my experience, the reality of any given research effort is usually nuanced – some aspects of it are much more valuable than others. Unfortunately, many arts professionals lack the expertise to properly evaluate research reports, never having had even basic statistics training.

Issue #4: Objectivity

Research is about uncovering the truth, but sometimes people don’t want to know the truth. Advocacy goals often precede research. How many times have you heard somebody say a version of the following: “We need research to back this up”? That statement suggests a kind of research study that we see all too often: one that is conducted to affirm decisions that have already been made. By contrast, when we create a logic model, we start with the end first: we identify what we are trying to achieve and only then determine the activities necessary to achieve it.

Here are a bunch of bad but common reasons to do a research project:

  • To prove your own value.
  • To increase your organization’s prestige.
  • To advance an ideological agenda.
  • To provide political cover for a decision.

There is only one good reason to do research, and that is to try to find out something you didn’t know before.

Issue #5: Fragmentation

The worst part of the problem I just described is that it drives what research gets done – and what doesn’t get done. There is no common research agenda adopted by the entire field, which is a shame, because collective knowledge is pretty much the definition of a public good: if I increase my own knowledge, it’s very easy for me to increase your knowledge too. The practical consequences of this fragmentation are severe. It results in a concentration of research using readily available data sources (ignoring the fact that the creation of new data sources may be more valuable). It results in a concentration of research in geographies and communities that can afford it, because people don’t often pay for research that’s not about them. And it results in a concentration of research serving narrow interests: discipline-specific, organization-specific, methodology-specific. My biggest pet peeve is that research is almost never intentionally replicated – everybody’s reinventing the wheel, studying the same things over and over again in slightly different ways. A great example of a research study crying out for replication is the Arts Ripple Effect report, which I talked about earlier. The results of that study are now guiding the distribution of millions of dollars in annual arts funding. Are those results universal, or unique to the Greater Cincinnati region? We have no way to know.

Issue #6: Allocating resources

Everyone knows there’s been a trend in recent years towards more and more data collection at the level of the organization or artist. Organizations, especially small ones, complain all the time about being expected to do audience surveys, submit onerous paperwork, and so forth. And you know what, I agree with them! You might be surprised to hear me say that, but when you’re talking about organizations that have small budgets, no expertise to do this kind of work, and a funder who is requesting the information without providing any assistance to get it, my advice to that funder is: just take a risk! You make a small grant that goes bad, so what? You’re out a few thousand dollars. The sun will rise tomorrow.

As an example of what I’m talking about, I participated in a grant panel recently. I enjoyed the experience, and am glad I did it, but there’s one aspect of the experience that is relevant here. There were seven panelists, and we were all from out of town. Each of us spent, I’d say, roughly 40 hours reviewing applications in advance of the panel itself. Then we all got together for two full days in person to review these grants some more and talk about them and score them. We did this for 64 applications for up to $5,000 each, and in the end, 94% were funded.

So consider this as a research exercise. The decision is who to give grants to, and how much. The data is the grant applications. The researchers are the review panel. What uncertainty is being reduced by this process? How much worse would the outcome have been if we’d just taken all the organizations, put them into Excel, run a random number generator, and distributed the dollars randomly up to $5,000 per organization? And I’m not saying this to make fun of this particular organization or single them out, because honestly it’s not uncommon to take this kind of approach to small-scale grantmaking. And yet if you compare it to ArtPlace’s first round of grants, theoretically they had thousands of projects to choose from, and they gave grants up to $1 million for creative placemaking projects – but there was no [open] review process; they just chose organizations to give grants to. So there’s a bit of a mismatch in the strategies we use to decide how to allocate resources.
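The random-allocation thought experiment is trivial to carry out in code. Here is a minimal sketch; the applicant list, total budget, and random seed are all invented for illustration:

```python
# A sketch of the thought experiment: skip the review panel entirely and
# distribute grants of up to $5,000 at random until the budget runs out.
# Applicant names, the $250,000 budget, and the seed are hypothetical.
import random

def random_allocation(applicants, budget, max_grant=5_000, seed=42):
    """Award each applicant (in random order) a random grant of 1..max_grant,
    capped by the remaining budget, until the budget is exhausted."""
    rng = random.Random(seed)
    awards = {}
    for org in rng.sample(applicants, k=len(applicants)):  # shuffle order
        if budget <= 0:
            break
        grant = min(rng.randint(1, max_grant), budget)
        awards[org] = grant
        budget -= grant
    return awards

applicants = [f"org_{i:02d}" for i in range(64)]
awards = random_allocation(applicants, budget=250_000)
print(len(awards), sum(awards.values()))
```

The question the speech is posing is how much worse this procedure's outcomes would really be than roughly 500 person-hours of panel review, given that nearly every applicant was funded anyway.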

There’s a concept called “expected value of information” described in a wonderful book called How to Measure Anything, by Douglas W. Hubbard. It’s a way of taking into account how much information matters to your decision-making process. In the book, Hubbard shares a couple of specific findings from his work as a consultant. He found that most variables have an information value of zero; in other words, we can study them all we want, but whatever the truth is, it’s not going to change what we do, because those variables don’t matter enough in the grand scheme of things. And he also found that the things that matter the most, the kinds of things that really would change our decisions, often aren’t studied, because they’re perceived as too difficult to measure. So we need to ask ourselves how new information would actually change the decisions we make.

There is so much untapped potential in arts research. But it remains untapped because of all the issues described above. So what can we do about it?

First, we need a major field-building effort for arts research. Connecting researchers with each other through a virtual network/community of practice would help a lot. So would a centralized clearinghouse where all research can live, even if it’s behind a copyright firewall. The good news is that the National Endowment for the Arts has already been making some moves in this direction. The Endowment published a monograph a couple of months ago called “How Art Works,” the major focus of which was a so-called “system map” for the arts. But the document also had a pretty detailed research agenda for the NEA, not for the entire field, that lays out what the NEA’s Office of Research and Analysis is going to do over the next five years, and two of the items mentioned are exactly the two things I just talked about: a virtual research network and a centralized clearinghouse for arts research.

This new field that we’re building should be guided by a national research agenda that is collaboratively generated and directly tied to decisions of consequence. The missing piece from the research agenda in “How Art Works” is the tie to actual decisions. Instead it has categories, like cultural participation, and research projects can be sorted under those buckets. But it’s not enough for research to simply be about something – research should serve some purpose. What do we actually need to know in order to do our jobs better?

We should be asking researchers to spend less time generating new research and more time critically evaluating other people’s research. We need to generate lots more discussion about the research that is already produced. That’s the only way it’s going to enter the public consciousness. Each time we fail to do that, we are missing out on opportunities to increase knowledge. It will also raise our collective standards for research if we are engaging in a healthy debate about it. But realistically, in order for this to happen, field incentives are going to have to change – analyzing existing research will need to be seen as equally prestigious and worthy of funding as creating a new study. Of course, I would prefer that people not evaluate the work of their direct competitors – but I’ll take what I can get at this point!

Every research effort should take into account the expected value of the information it will produce. Consider the risk involved in various types of grants made. What are you trying to achieve by giving out lots of small grants, if that’s what you’re doing? Maybe measure the effectiveness of the overall strategy instead of the success or failure of each grant. This is getting into hypothesis territory, but based on what I’ve seen so far I would guess that research on grant strategy is woefully underfunded, while research on the effectiveness or potential of specific grants is probably overfunded. We probably worry more than we need to about individual grants, but we don’t worry as much as we should about whether the ways in which we’re making decisions about which grants to support are the right ways to do that.

Finally, we should be open-sourcing research and working as a team. I’m talking about sharing not just finished products and final reports, but plans, data, methodologies as well. I’m talking about seeking multiple uses and potential partners at every point for the work we’re doing. This would make our work more effective by allowing us to leverage each other’s strengths – we’re not all experts at everything, after all! And it would cut down on duplicated effort and free up expensive people’s time to do work that moves the field forward.

I thank everyone for their time, and I’d love to take any questions or comments on these thoughts about the state of our research field.

(Enjoyed this post? Today is the last day of our campaign to make the next generation of Createquity possible. We’re thrilled to have reached our initial goal, but additional contributions are still welcome and will be put to good use in strengthening us for the future. Thank you for your support!)

1 Comment

Last Chance to Take Createquity to the Next Level!

Createquity readers, tomorrow is the final day in our Indiegogo funding campaign. Thanks to your generous contributions, as of this writing we have raised $7,385 from 93 funders toward our $10,000 goal. It’s been truly humbling to witness the number of people who care enough about high-quality information and analysis in the arts to contribute. And with just about 36 hours left in the campaign, it’s time to put the pedal to the metal to bring us over the top. If you believe we as a sector need better, data-driven advocacy, or simply appreciate Createquity as a resource for your work, please donate today!

One of the most gratifying things about this campaign so far has been seeing the wave of support we’ve received from people whose work is central to our field. Barry Hessenius, whose blog is another widely-read resource among arts managers, graced us last week with a completely unsolicited and glowing endorsement of this project:

I hope you will go to the Indiegogo site and support this effort.  I did….I can give you two good reasons why you might part with the cost of a couple of Starbuck’s half caffeine, double mocha, caramel, latte frappacinnos:  First:  Ian and the people he has assembled to help with his newest reinvention of his site are exactly the people we want to support in our field – young, smart, dedicated, committed people who are already making a contribution to the field to help make things better for everyone.  Supporting that alone ought to be worth ten or twenty bucks.  But Second, I can almost guarantee you that if you follow whatever Createquity does over the next year you will read two or more posts that you (you personally) will find of great value to what you are doing on your job.  That ought to be worth a few bucks, no?

And how often do you get to play Santa Claus in July?

Indeed, it’s been amazing to see the movers and shakers who find value in Createquity’s work. Every year, with the help of a pool of nominators, Barry compiles a list of the nonprofit arts sector’s 50 most powerful and influential leaders. More than a fifth of the 2013 list has contributed to our campaign so far. The show of support from our field has been extraordinary, with donations from star consultants like Holly Sidford (Helicon Collaborative), Alan Brown (WolfBrown), Adrian Ellis (AEA), Jerry Yoshitomi (MeaningMatters), Claudia Bach (AdvisArts), and Anne Gadwa Nicodemus (Metris Arts Consulting); arts organization leaders like Adam Huttler (Fractured Atlas), Laura Zucker (LA County Arts Commission), Mara Walker (Americans for the Arts), and Kemi Ilesanmi (The Laundromat Project); current and former foundation leaders like Kerry McCarthy (New York Community Trust), Angelique Power (Joyce Foundation), and Marian Godfrey (ret. Pew Charitable Trusts); and fellow arts thinkers and information mavens Doug McLennan (ArtsJournal), Nina Simon (Museum 2.0), Thomas Cott (You’ve Cott Mail), Andrew Taylor (The Artful Manager), and Diane Ragsdale (Jumper). The latter four have contributed to our campaign in particularly special ways: Thomas, Andrew, and Diane all were kind enough to record video testimonials for us (embedded below), and Nina is donating two rare signed copies of her classic read The Participatory Museum, which are available to donors at the $100 level. Grab ‘em fast!

I hope you agree with us that this is a pretty incredible list. Won’t you add your name to it and help us cross the finish line?

Leave a comment

[Createquity Reruns] Public Art and the Challenge of Evaluation

(Createquity’s summer rerun programming continues this week with a focus on arts research! This instant classic by Createquity Writing Fellow Katherine Gressel spread like wildfire when it was first published in January 2012, and remains our third-most popular post ever. It even brought us a bunch of new readers from Australia! [Long story.] While not a short read, it’s packed with useful information about how practitioners have gone about conceptualizing and evaluating one of the hardest beasts to measure – public art. -IDM)

Steve Powers, “Look Look Look,” Part of the “A Love Letter for You” project, commissioned by the Philadelphia Mural Arts Program, 2009-2010. http://www.aloveletterforyou.com

In the Spring/Summer 2011 issue of Public Art Review, Jack Becker writes, “There is a dearth of research efforts focusing on public art and its impact. The evidence is mostly anecdotal. Some attempts have focused specifically on economic impact, but this doesn’t tell the whole story, or even the most important stories.”

Becker’s statement gets at some of the main challenges in measuring the “impact” of a work of public art—a task which more often than not provokes grumbling from public art administrators. When asked how they know their work is successful, most organizations and artists that create art in the public realm are quick to cite things like people’s positive comments, or the fact that the artwork doesn’t get covered with graffiti or cause controversy.

We are much less likely to hear about systematic data gathered over a long time period—largely due to the seemingly complex, time-consuming, or futile nature of such a task. Unlike museums or performance spaces, public art traditionally doesn’t sell tickets, or attract “audiences” who can easily be counted, surveyed, or educated. A public artwork’s role in economic revitalization is difficult to separate from that of its overall surroundings. And as Becker suggests, economic indicators of success may leave out important factors like the intrinsic benefits of experiencing art in one’s everyday life.

However, public art administrators generally agree that some type of evaluation is key in not only making a case for support from funders, but in building a successful program. In the words of Chicago Public Art Group (CPAG) executive director Jon Pounds, evaluations can at the very least “help artists strengthen their skills…and address any problems that come up in programming.” Is there a reliable framework that can be the basis of all good public art evaluation? And what are some simple yet effective evaluation methods that most organizations can implement?

This article will explore some of the main challenges with public art evaluation, and then provide an overview of what has been done in this area so far with varying degrees of success. It builds upon my 2007 Columbia University Teachers College Arts Administration thesis, And Then What…? Measuring the Audience Impact of Community-Based Public Art. That study specifically dealt with the issue of measuring audience response to permanent community-based public art, and included interviews with a wide range of public artists and administrators.

This article will discuss evaluation more broadly—moving beyond audience response—and incorporate more recent interviews with leaders in the public art field. My goal was not to generate quantitative data on what people are doing in the field as a whole with evaluation (according to Liesel Fenner, director of Americans for the Arts’s Public Art Network, such data is not yet available, though it is a goal). Instead, I have reviewed recent literature on public art assessment, and interviewed a range of different types of organizations, from government-run “percent for art” and transit programs to grassroots community-based art organizations in New York City (where I am based) and other parts of the United States. I sought to find out whether evaluation is considered important, how much time is devoted to it, and the details of particularly innovative efforts.

The challenge of defining what we are actually evaluating

The term “public art” once referred to monumental sculptures celebrating religious or political leaders. It evolved during the mid-twentieth century to include art meant to speak for the “people” or advance social and political movements, as in the Mexican and WPA murals of the 1930s, or the early community murals of the 1960s-1970s civil rights movements. Today, “public art” can describe anything from ephemeral, participatory performances to illegal street art to internet-based projects. The intended results of various types of public art, and our capacity to measure them, are very different.

In the social science field, evaluation typically involves setting clear goals, or expected outcomes, connected to the main activities of a program or project. It also involves defining indicators that the outcomes have been met. This exercise often takes the form of a “theory of change.” Since there are so many types of public art, it is exceedingly difficult to develop one single “theory of change” for the whole field, but it may be helpful to use a recent definition of public art from the UK-based public art think tank Ixia: “A process of engaging artists’ ideas in the public realm.” This definition implies that public art will always occupy some kind of “public realm”—whether it is a physical place or an otherwise-defined community—and require an “engagement” with the public that may or may not result in a tangible artwork as an end result. This process and the reactions of the public must be evaluated along with whatever artistic product may come out of it.

The challenge of building a common framework for evaluation

In 2004, Ixia commissioned OPENspace, the research center for inclusive access to outdoor environments based at the Edinburgh College of Art and Heriot-Watt University, to research ways of evaluating public art, ultimately resulting in a comprehensive 2010 report, “Public Art: A Guide to Evaluation” (see a helpful summary by Americans for the Arts). The guide’s emphasis and content were shaped by feedback from Ixia’s Evaluation Seminars and fieldwork conducted by Ixia and consultants who have used its Evaluation Toolkit. Ixia provides the most comprehensive resources on evaluation that I have encountered, with two main evaluation tools, the evaluation matrix and the personal project analysis. These are helpful as a starting point for evaluating any project or program.

The matrix’s goal is to “capture a range of values that may need to be taken into account when considering the desirable or possible outcomes of engaging artists in the public realm.” It is meant to be filled out by various stakeholders during a project-planning stage, as well as at the midpoint and conclusion of a project.

Ixia’s “personal project analysis” is “a tool for process delivery that aims to assess how a project’s delivery is being put into practice.” I will not analyze it in detail here, except to say that something similar should also ideally be part of any organization’s evaluation plan, as it allows for assessing how well the project is being carried out.

Personal Project Analysis from Ixia’s “Public Art: A Guide to Evaluation”

Matrix from Ixia’s “Public Art: A Guide to Evaluation”

Ixia’s matrix identifies four main categories of values:

  1. Artistic Values [visual/aesthetic enjoyment, design quality, social activation, innovation/risk, host participation, challenge/critical debate]
  2. Social Values [community development, poverty and social inclusion, health and well-being, crime and safety, interpersonal development, travel/access, and skills acquisition]
  3. Environmental Values [vegetation and wildlife, physical environment improvement, conservation, pollution and waste management (air, water, and ground quality), and climate change and energy]
  4. Economic Values [marketing/place identity, regeneration, tourism, economic investment and output, resource use and recycling, education, employment, project management/sustainability, and value for money]

The matrix accounts for the fact that each public artwork’s values and desired outcomes will be different depending on the nature of the presenting organization, site, and audience.

It is unclear how widely these tools have been adopted in the UK since their publication, and I did not encounter anyone in the U.S. using them. Yet many organizations are employing a similar process of engaging various stakeholders during the project-planning phase to determine goals specific to each project, which relate to the categories in Ixia’s matrix. For example, most professionals I interviewed cited some type of “artistic” goals for the work. Some organizations prioritize presenting the highest quality art in public spaces, in which case the realization of an artist’s vision is top priority (representatives of New York City’s Percent for Art program described “Skilled craftsmanship” and “clarity of artistic vision” as key success factors, for example).

By contrast, organizations that include a youth education or community justice component may rank “social” or “economic” values higher. Groundswell Community Mural Project, an NYC-based nonprofit that creates mural projects with youth, uses pre-surveys to ask all organizations that host mural projects (which may include schools, government agencies, and community-based organizations) to choose their top desired project outcomes from a range of choices, as well as to identify project-specific issues. Groundswell does have a well-developed theory of change behind all its projects, relating to the organization’s core mission to “beautify neighborhoods, engage youth in societal and personal transformation, and give expression to ideas and perspectives that are underrepresented in the public dialog.” However, some project-specific outcomes may be more environmental (for example, partnerships with the Trust for Public Land to integrate murals into new school playgrounds), while others relate to “crime and safety,” as in an ongoing partnership with the NYC Department of Transportation to install murals and signs at dangerous traffic intersections that educate the public about traffic safety.


Groundswell Community Mural Project, signs from “Traffic Safety Program,” a partnership between Groundswell, the Department of Transportation’s Safety Education program, and several NYC public elementary schools. Lead artists Yana Dimitrova, Chris Soria, and Nicole Schulman worked with students to create these signs installed at locations identified as most in need of traffic signage.

Groundswell is just one example of many public art organizations that set goals at the outset of each individual project, based on each project’s particular site and community. While individual organizations may effectively evaluate their own projects this way, crafting a common theory of change for all public art may be an unrealistic expectation.

The challenge of reliable indicators and data collection

The Ixia report discusses the process by which indicators of public art’s ability to produce desired outcomes may be identified, with the following questions:

  1. Is it realistic to expect a public art project to influence the outcomes you are measuring?
  2. Is it likely that you can differentiate the impact of the public art project and processes from other influences, e.g., other local investment?
  3. Is it possible to collect meaningful data on what matters in relation to the chosen indicators?

For example, in studies seeking to measure any kind of change, good data collection should always include a baseline—i.e., economic conditions or attitudes of people BEFORE the public art entered the picture. Data collection methods ideally should also be reliable, unbiased, and easily replicated.

The “Guide to Evaluation” does not go into detail about any concrete indicators of public art’s “impact.” Therefore, the matrix seems to be most useful as a guide to goal-setting. As the Americans for the Arts summary of this report points out, “Ixia directs users to [UK-based] government performance indicators as a baseline source, but that is where the discussion ends.”

Liesel Fenner of Americans for the Arts’ Public Art Network mentioned in an email to me that while PAN hopes to develop a comprehensive list of indicators in the future that can be shared among public art presenters nationally, “developing quantitative indicators is the main obstacle.”

According to my interviews with both on-the-ground administrators and public art researchers, many busy arts administrators find the type of data collection recommended in Ixia’s guide difficult, costly, and time-consuming. It can be a challenge to get artistic staff to buy into even basic evaluation. As one community arts administrator puts it, “artists are paid for their leadership in developing and delivering a strong project. Many artists don’t see as much value in evaluation because, in part, it comes in addition to the difficult work that they just accomplished.” It is also uncommon to spend precious training resources on something like quantitative evaluation techniques.

Some are of the opinion that even if significant time were spent on justifying public art’s existence by “proving” its practical usefulness, this would still be a losing battle that could lead to the withdrawal of support for public art, the production of bad art that panders merely to public needs, or both. One seasoned public art administrator asked me: “Is architecture evaluated this way? The same way public buildings need to exist, public art needs to exist. It’s people looking to weaken public art who are trying to ask these questions about its impact.”

The challenge of evaluating long-term, permanent installations

Glenn Weiss, former director of the Times Square Alliance Public Art Program and current director of Art League Houston, posits that economic impact studies are “most possible with highly publicized, short-term projects like the Gates or large public art festivals.” Indeed, the New York City Mayor’s office published a detailed report on “an estimated $254 million in economic activity” that resulted from The Gates, a large installation in Central Park by internationally acclaimed artists Christo and Jeanne-Claude, based on data like increased park attendance and business at nearby hotels, restaurants, etc. However, most public art projects, even temporary ones, are not as monumental or heavily promoted as The Gates, making it difficult to prove that people come to a neighborhood, or frequent its businesses, primarily to see the public art.

Visitors crowd Christo and Jeanne-Claude’s “The Gates” (2005) in Central Park. Photo by Eric Carvin.

Weiss also believes that temporary festivals are generally easier to evaluate quantitatively than long-term public art projects. For example, during a finite event or installation, staff members can keep a count of attendees (some of the temporary public art projects I have encountered in my research, such as the FIGMENT annual participatory art festival on Governors Island and in various other U.S. cities, use attendance counts as a measure).

The few comprehensive studies connecting long-term, permanent public art to economic and community-wide impacts, conducted by research consultants and funded by specific grants, have led to somewhat inconclusive results. For example, An Assessment of Community Impact of the Philadelphia Department of Recreation Mural Arts Program (2002), led by Mark J. Stern and Susan C. Seifert of University of Pennsylvania’s Social Impact of the Arts Project (SIAP), cites the assumed community-wide benefits of murals outlined in MAP’s mission statement at the time of the study:

The creation of a mural can have social benefits for entire communities…Murals bring neighbors together in new ways and often galvanize them to undertake other community improvements, such as neighborhood clean-ups, community gardening, or organizing a town watch. Murals become focal points and symbols of community pride and inspiring reminders of the cooperation and dedication that made their creation possible.

Yet when asked to “use the best data available to document the impact that murals have had over the past decade on Philadelphia’s communities,” Stern and Seifert found that

this is a much more difficult task than one might imagine. First, there are significant conceptual problems involved in thinking through exactly how murals might have an impact on neighborhoods. Second, the quality of data available to test hypotheses concerning murals is limited. Finally, there are a number of methodological problems involved in using the right comparisons in assessing the potential impact of murals. For example, how far from a mural might we expect to see an impact? How long after a mural is painted might it take to see an effect and how long might that effect last?…Ultimately, this report concludes that these issues remain a significant impediment to understanding the role of murals.

By comparing data on murals to existing neighborhood quality of life data, Stern and Seifert considered murals’ connection to factors like community economic investment and indicators of more general neighborhood change (such as reduced litter or crime, or residents’ investment in other community organizing activities). The study also measured levels of community investment and involvement in murals. However, the scarce data available on these factors, according to the authors, are difficult to connect directly to public art in a cause and effect relationship. Stern and Seifert’s strongest finding was that murals may build “social capital,” or “networks of relationships” that can promote “individual and group well-being,” because of all the events surrounding mural production in which people can participate. It was more difficult to show a consistent relationship between murals and other theorized outcomes, such as ability to “inspire” passersby or serve as “amenities” for neighborhoods. The study recommends that “more systematic information on their physical characteristics and sites—‘before and after’—would provide a basis for identifying murals that become an amenity.”

A 2009 report on Philadelphia’s commercial corridors by Econsult also found “some indication of a positive correlation” between the presence of murals and shopping corridor success. Murals are described here as “effective and cost efficient ways of replacing eyesores with symbols of care.” However, the report also adds the disclaimer that a positive correlation is not necessarily proof of the murals’ role as the primary cause of a neighborhood’s appeal.

So what can we assess most easily, and how?

My research revealed that quantitative data on short-term inputs and outputs of public art programs is frequently cited (sometimes inappropriately) in reports and funding proposals as evidence of a program’s success: for example, the number of new projects completed in one year, the number of youth or community partners served, or the number of mural tour participants. However, in this article I am not focusing on this type of reporting, as it does not address how public art impacts communities over time.

The good news is that there are several examples of indicators that are more easily measurable in certain types of public art situations, including permanent installations. These include:

  • Testimonies on the educational and social impact of collaborative public art projects, from youth and community participants and artists alike
  • Qualitative audience responses to public art, including whether or not the art provokes any type of discussion, debate, or controversy
  • How a public artwork is treated over time by a community, including whether it gets vandalized, and whether the community takes the initiative to repair or maintain it
  • Press coverage
  • The “use” of a public artwork by its hosts, e.g. in educational programs or marketing campaigns
  • Levels of audience engagement with public art via internet sites and other types of educational programming

Below, I summarize some helpful methods for collecting data on these indicators.

Mining the Press

Archiving press coverage of public art projects online is a common practice among organizations, as is presenting pithy press clippings and quotes in funding proposals and marketing materials as a means of demonstrating a project’s success. For researchers, studying articles (and increasingly, blog posts) on past projects can also provide rich documentation of artworks’ immediate effects, as well as points of comparison. For example, the “comments” sections of online articles and blogs can generate interesting, often unsolicited feedback, albeit from a nonrandom sample.

One possible outcome of public art projects is controversy, which is not always considered a bad thing, despite now-infamous examples of projects like Richard Serra’s Tilted Arc being removed. For example, Sofia Maldonado’s 42nd Street Mural, presented in March 2010 by the Times Square Alliance, provoked extensive coverage on news programs and blogs. The mural’s un-idealized images of Latin American and Caribbean women based on the artist’s own heritage led some women’s and cultural advocacy organizations to call for its removal. The Alliance opted to leave the mural up, and has cited this project as evidence of the Alliance’s commitment to artists’ freedom of expression. The debates led Maldonado to reflect, “as an art piece it has accomplished its purpose: to establish a dialogue among its spectators.”

Sofia Maldonado, “42nd Street Mural,” 2010, Commissioned by the Times Square Alliance Public Art Program.

Site visits and “public art watch”

As an attempt to promote more sustained observation of completed works over time, public art historian Harriet Senie assigns her students in college and graduate level courses a final term paper project every semester that contains a

“public art watch”… “For the duration of a semester, on different days of the week, at different times, students observe, eavesdrop, and engage the audience for a specific work of public art. Based on a questionnaire developed in class and modified for individual circumstances, they inquire about personal reactions to this work and to public art in general” (quoted in Sculpture Magazine).

Senie’s students also observe things like people’s interactions with an artwork, such as how often they stop and look up at it, take pictures in front of it, or use it as a meeting place.

Senie maintains that “Although far from ‘scientific,’ the information is based on direct observation over time—precisely what is in short supply for reviewers working on a deadline.” This approach towards challenging college students to think critically about public art has also been implemented in public art courses at NYU and Pratt Institute, and the aggregate results of student research over time are summarized in one of Senie’s longer publications.

I have not encountered any other organizations able to integrate this type of research into their regular operations; however, there may be opportunities to integrate direct observation into routine site visits to completed permanent public artworks.

In the NYC Percent for Art program, and its Public Art for Public Schools (PAPS) wing that commissions permanent art for new and renovated school buildings, staff members are expected to undertake periodic visits “to monitor the condition of artworks that have been commissioned,” according to PAPS director Tania Duvergne. Such “maintenance checks” can provide opportunities to survey building inhabitants or local residents about their opinions and use of the artworks.

Duvergne uses these “condition report” visits as opportunities to further her agency’s mission to “bridge connections between what teachers are already doing in their classrooms and their physical environments.” At each site, she tries to interview custodians, teachers, principals and students about whether the art is well treated, whether they know anything about the artwork (and are using the online resources available to them), and whether they want more information. Duvergne notes that many teachers use the public art in their teaching in some way, even if they do not know a lot about the artwork. While observing a public artwork during a site visit every few years is nowhere near as extensive and sustained as Senie’s semester-long class assignment, perhaps a similar survey and observation could be undertaken with a wide range of students and staff members over the course of a day.

Project participant and resident surveys

Organizations that create community-based public art usually have specific desired social, educational, or behavioral outcomes in project participants. Mural organizations Groundswell and Chicago Public Art Group describe thorough evaluation processes in which mural artists, youth, community partners and parents are all surveyed and sometimes interviewed before, during and after projects. Groundswell’s community partner post-project survey, for example, asks partners to rank their level of agreement about whether certain community-wide outcomes have been met, such as whether the mural increases the organization’s visibility, increases awareness of an identified issue, and improves community attitudes towards young people.

Groundswell’s theory of change (most recently honed in 2010 through focus groups with youth participants and community partners) articulates clear desired outputs and outcomes for both youth and community partner organizations. This includes the development of “twenty-first century” life skills in teen mural participants. To measure this impact specifically, Groundswell has made it a priority to continue to track youth participants after they graduate, turn 21, and reach other checkpoints, according to Executive Director Amy Sananman. Groundswell recently hired an outside researcher to build a comprehensive database (using Salesforce, which is available free to nonprofits), in which participant data, survey results, and data on completed murals (such as whether any were graffitied, how many times they appeared in news articles, etc.) can be entered and compared to generate reports.

In 2006, Philadelphia’s Mural Arts Program conducted a community impact study using audience response questionnaires as a starting point. Then-special projects manager Lindsey Rosenberg employed college students, through partnerships with local universities, to conduct door-to-door surveys of all residents living within a mile radius of four murals. The murals differed by theme, neighborhood, and level of community involvement. The interns orally administered a multiple-choice questionnaire with questions ranging from general opinions of the murals to level of participation in making the murals to perceptions of changes in the neighborhood as a result of the murals. They then entered the surveys into a computer database specifically created for this study by outside consultants. The database not only calculated the percentage of each response to the murals, but also tracked correlations between these responses and census demographic data, including income level and home ownership.

This research project was different from prior MAP community impact studies in that it assumed that “what people perceive to be the impact of a mural is in itself valuable,” as much as external evidence of change.

In 2007, MAP shared some preliminary results of this endeavor with me to aid my thesis research. At the time the research seemed to generate some useful data on which murals were appreciated most in which neighborhoods, and the correlation between appreciation and community participation in the projects. However, since then I have not been able to gather any further information on this study, or find any published results. I did hear from MAP at the time of the study that only 25% of people who were approached actually took the surveys, indicating just one problematic aspect of conducting such research on a regular basis. The database was also costly.

Most recently, MAP is partnering (page 160) with the Philadelphia Department of Behavioral Health & Mental Retardation Services (DBH/MRS), community psychologists from Yale, and almost a dozen local community agencies and funders with core support from the Robert Wood Johnson Foundation, on “a multi-level, mixed methods comparative outcome trial known as the Porch Light Initiative. The Porch Light Initiative examines the impact of mural making as public art on individual and community recovery, healing, and transformation and utilizes a community-based participatory research (CBPR) framework.” Unfortunately, MAP declined my requests for more information on this new study.

Interviewing youth and community members can of course only generate observations and opinions, but Groundswell at least is taking the additional step of tracking what happens to participants after they complete a mural project. I am still not clear how to prove that any impacts on participants are a direct result of public art projects. Yet surveying project participants and community members about their feelings about a program or project, and how they think they were impacted by it, is one of the most doable types of research (apart from the challenges of getting people to fill out surveys).

Community-based “proxies”

Groundswell director Amy Sananman has described some success in utilizing community partners as “proxies” for reporting on a mural’s local impact, effectively outsourcing some of the burden of data collection to other organizations. For example, the director of a nonprofit whose storefront has a Groundswell mural could report back to Groundswell on the extent to which local residents take care of the mural, how often people comment on it, etc.

PAPS, CPAG, and ArtBridge, an organization that commissions artwork for vinyl construction barrier banners, have described similar ideas for partnerships. ArtBridge hopes to implement a more formal process in which the owners of stores where its banners are installed can document changes like increased business due to public art. PAPS director Tania Duvergne also cites examples of “successful projects” in which public schools, on their own, designed art gallery displays or teaching curricula around their public art pieces, and shared this with PAPS on site visits.

There might be a danger in depending on community partner organization representatives to speak for the whole “community” or to provide reliable, accurate data. But if cooperative partners can be identified and regular reporting scheduled using consistent measurement tools, the burden of reporting on specific neighborhoods is lessened for the public art organization.

“Smart” Technology

Groundswell, ArtBridge, and MAP are all starting to use QR codes, which direct public art site visitors’ smartphones to websites with more information about the art. Groundswell experimented this past summer with adding QR codes to a series of posters designed by its Voices Her’d Visionaries program, hung in public schools to educate teens about healthy relationships. Groundswell can then track how many hits the website receives through scans of the codes. In general, web activity on public art sites is an easy quantitative measure of public interest.
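Hit counts like these can be pulled from ordinary web-server logs. The sketch below assumes a hypothetical, simplified log format and a `?src=qr` query parameter that appears only in the URLs encoded in the printed QR codes; the paths and parameter name are my own invention, not any organization’s actual setup.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical access-log lines; "?src=qr" marks visitors who
# arrived by scanning an on-site QR code rather than browsing.
log_lines = [
    "GET /murals/voices-herd?src=qr",
    "GET /murals/voices-herd",
    "GET /murals/voices-herd?src=qr",
    "GET /about",
]

qr_hits = Counter()
for line in log_lines:
    path = line.split()[1]
    parsed = urlparse(path)
    if parse_qs(parsed.query).get("src") == ["qr"]:
        qr_hits[parsed.path] += 1

print(qr_hits)  # Counter({'/murals/voices-herd': 2})
```

Counting per artwork page, rather than site-wide, lets a program compare public interest across individual installations over time.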

Philadelphia’s Mural Arts Program has a “report damage” section on its website, where anyone who notices a mural in need of repair can alert MAP online. This is also a potential source for quantitative evidence of how many people notice and feel invested in murals.

Use of Interpretive Programming

Public art organizations are increasingly designing interpretive programming around completed artwork, from outdoor guided tours to curated “virtual” artwork displays. NYC’s Metropolitan Transportation Authority’s Arts for Transit program provides downloadable podcasts about completed artworks on its website; other organizations post phone numbers at public art sites themselves that visitors can call for guided tours (as in many museum exhibits). Both in-person and virtual/phone tours can provide rich opportunities to track usage, collect informal feedback from participants, and solicit feedback via surveys. ArtBridge recently initiated its WALK program, giving tours of its outdoor banner installations. After each tour, ArtBridge emails a link to a brief questionnaire to all tour participants, and offers a prize as an incentive for taking the survey.

A Philadelphia Mural Arts Program guided tour.

Concluding remarks: What next for evaluation?

While systematic, reliable quantitative analysis of public art’s impact at the neighborhood level remains challenging and undervalued in the field, new technologies as well as effective partnerships are making it increasingly feasible for public art organizations to assess factors such as audience engagement, benefits to participants, and community stewardship of completed public art works. The Ixia “Guide to Evaluation” offers a useful roadmap for approaching the evaluation of any type of public art project. At the same time, we should not forget the ability of art to affect people in ways that may seem intangible or even immeasurable, or, as Glenn Weiss puts it, “become part of a memory of a community, part of how a community sees itself.”

(Enjoyed this post? We’re raising funds through this Thursday to make the next generation of Createquity possible. We’re getting close, but need your help to cross the finish line. Please consider a tax-deductible donation today!)

[Createquity Reruns] On Stories vs. Data

(Createquity’s summer rerun programming continues this week with a focus on arts research! Over the next few months, we’re reaching into the archives to pull out some of the best articles and most underrated gems we’ve published since 2007. This post was originally written by me for the Fractured Atlas blog in March 2011, and argues that data and stories are much more closely intertwined than the way we talk about them would suggest. -IDM)

Many of us, especially if we’ve been present at a Rocco Landesman speech in the past year or so, are probably familiar with the quote widely attributed to W. Edwards Deming: “In God we trust; all others must bring data.” And if you’ve filled out a final report for any grant recently, you’ve probably come face to face with philanthropy’s insatiable hunger for numbers. Attendance figures, financial data, surveys—all of these and more are increasingly becoming an immovable fixture of life in the arts, often to the chagrin of fundraisers and arts administrators.

Much of the recent push toward measurement in the nonprofit sector is driven by a new generation of philanthropists, many coming from metrics-obsessed corporate America, who see in numbers the promise of being able to evaluate the effectiveness of their giving with the same facility as their investments in the stock market. The leaders of this so-called “smart giving” movement carry a strong distrust of anecdotal evidence (GiveWell is pretty much exhibit A for this), and privilege “hard,” rigorously collected data instead. Conveniently, they also typically focus the bulk of their attention and resources on cause areas such as education, poverty, and global health, where data is in much more ready supply.

Caught in the middle of this trend, artists frequently express discomfort with perceived attempts to translate their work into a statistic. For a field that prides itself on expressing the inexpressible, the notion of reducing a potentially life-changing experience to a number doesn’t just feel confusing, it’s kind of insulting. What’s more, fundraisers who work with individual donors often find that, by contrast, a powerful story can do wonders where facts and figures fall flat. (The same could be said for advocates and politicians.)

It’s easy to see why artists and administrators might prefer stories to data. A story is rich, full of detail and shape. Data is flat. Put another way, data is mined from the common ground between various stories, which means that in order for it to work, for it to be converted into the language of numbers, you have to exclude extraneous information. Even if that “extraneous” information happens to be really interesting and cool and sums up exactly why we do what we do!

The reason stories work for us as human beings is that they are few in number. We can spend two hours watching a documentary, or a week reading a history book, and get a really deep qualitative understanding of what was going on in a specific situation or in a specific case. The problem is that we can only truly comprehend so many stories at once. We don’t have the mental bandwidth to process the experiences of even hundreds, much less thousands or millions of subjects or occurrences. To make sense of those kinds of numbers, we need ways of simplifying and reducing the amount of information we store in each case. So what we do is we take all of those stories and we flatten them: we dry out all of the rich shape and detail that makes up their original form and we package them instead in a kind of mold, collecting a specific and limited set of attributes about each so that we can apply analysis techniques to them in batch. In a very real sense, data = mass-produced stories.

It sounds horrible when I put it like that, right? But it’s an essential process because without it, we can’t be assured that we’re looking at the whole picture. Especially when we’re dealing with a large number of potential cases or examples, if we just concentrate on those that are nearest to us, whether that proximity is measured by geography or social/professional circle or similarity to our own situation, there is a very real risk that we will draw inappropriate conclusions about examples that are a little farther afield. Either random statistical noise (especially in the case of small sample sizes) or a bias that skews the kinds of examples we seek out can contribute to this lack of precision about our conclusions.

So we gain something very significant when we flatten stories into data. At a minimum, if we’re doing it right, we gain the confidence that comes with looking at the whole picture rather than only a piece of it. At its very best, we gain the opportunity to formulate stories out of data – such as in the case of Steve Sheppard’s work on MASS MoCA and the revitalization of North Adams, MA. But we lose something too. We lose the ability to cross-reference obscure details about one of our examples with obscure details about another, and sometimes those obscure details turn out to be pretty important. We lose some of the context for understanding why data points might look the way they do, and depending on how well we’ve constructed our data, that may or may not change the conclusions we draw.

But make no mistake: stories are never incompatible with data. When you or someone you know has an incredible experience at an arts event, or when a troubled child’s life is saved through involvement with the arts, or when people are brought together who wouldn’t otherwise meet because of the arts, those are all great stories – and they’re also data. One could imagine counting the number of lives saved by the arts, scoring the quality of arts events, cataloguing the new connections and friendships made possible through arts activities. I’m not saying it’s easy to do such things, but that doesn’t mean they can’t be done meaningfully and with integrity. I think we need to challenge ourselves as a field to be more creative about how we articulate and measure the ways in which the arts improve lives. The answers that we’re looking for might be closer within our reach than we thought.

(Enjoyed this post? We’re raising funds through this Thursday to make the next generation of Createquity possible. We’re getting close, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


[Createquity Reruns] Our View of Creative Placemaking, Two Years In

(The two articles reposted earlier this week caused quite a stir when they were published, and it’s fair to say that they helped shape the public conversation around creative placemaking. That stir culminated in a direct response from two officials at the National Endowment for the Arts, Director of the Office of Research and Analysis Sunil Iyengar and Director of Design Jason Schupbach, that was published right here on Createquity in November 2012 and is reprinted below. ArtPlace’s Carol Coletta and Joe Cortright also wrote two responses to “Creative Placemaking Has an Outcomes Problem” and Ann Markusen’s piece. For the most up-to-date thinking on these topics on the part of ArtPlace America (as it’s now known) and the NEA, check out these links respectively. – IDM)

“The Bridge” by artist Elena Colombo, image courtesy of ArtsQuest (MICD25 grantee)

We continue to be grateful for the level of national discourse that has emerged since the National Endowment for the Arts’ introduction of Our Town, the federal government’s signature investment in creative placemaking. In particular, Createquity has published a number of blog posts that have provided us with valuable feedback. They have also raised insightful questions about the program resources and research needs for an initiative of this size and scale.

So much has been happening at the NEA – and some of the most vibrant conversations have been based in part on incomplete or out-of-date information – that we thought it made sense to run through our accomplishments and goals, now that we are in the second year of Our Town grantmaking. (If, after reading this post, you want to know even more about what is happening across the country, please take a look at the current issue of NEA Arts: “Arts and Culture at the Core: A Look at Creative Placemaking.”)



When Rocco Landesman arrived at the NEA in 2009, he put a name on something he saw happening all across this country, from the Little Haiti neighborhood in Miami, Florida, to the cultural district that sprang up around the Museum of Glass in Tacoma, Washington: cities and towns were using the arts to help shape their social, physical, and economic character. We were rich in anecdotes, but individual communities and organizations lacked the opportunity to connect with others doing similar work.

Like any good producer, Rocco realized that we could not create a community of practice without a name for our shared endeavor, and so the phrase “creative placemaking” was introduced into our national lexicon. Two efforts quickly followed: a white paper by Ann Markusen and Anne Gadwa Nicodemus that defined this sector of work, and a national convening of 40 experts in the arts, community development, and research. This diverse group launched a conversation about how to measure the presence and impact of the arts in U.S. communities.


The grant program

Both of these efforts helped inform the design of Our Town, which makes grants to partnerships among arts and design organizations and local governments to increase community livability through the arts. By framing the conversation around how communities can use the arts to contribute positively to shared priorities, rather than adopting the more traditional approach of simply stating what the arts organizations would like to do and asking for support to do it, Our Town projects have attracted an impressively diverse range of partners. These have included social service agencies, botanic gardens, schools, religious institutions, scientific organizations, local businesses, and business improvement districts.

Through two rounds, we have now invested more than $11.5 million in Our Town grants to 131 communities in all 50 states and the District of Columbia. Along the way, we learned an important lesson: creative placemaking is a big and inclusive tent, and in order to make sense of this emerging sector, we need to look at the specific sub-communities it contains. As grant administrators, we find that it helps to consider Our Town projects in terms of these sub-communities at different points of the award cycle.

From a grant-making point of view, for example, we sort applications into three subsets for review: arts engagement projects, cultural planning and design projects, and projects in non-metro and tribal communities. This is far from a mutually exclusive / completely exhaustive taxonomy, but for our review panels, this division allows Our Town grant applications to be examined in clusters that share similar opportunities, challenges, and access to resources.

Once the grants have been made, and we move into the mode of grants stewardship, it has made more sense for NEA staff to look at the projects based on the specific activities being undertaken. This list is bound to change or grow, but to date, Our Town grants tend to fall into these distinct categories: creating and strengthening artists’ work spaces; asset mapping/cultural district planning; creative industries and entrepreneurship; creating and strengthening cultural facilities; investing in festivals, performances, and other innovative arts programming; reinventing public spaces through creative uses; and the planning and implementation of temporary and permanent public art.

To varying degrees, this taxonomy has subsequently guided our work in evaluating grant projects, conducting national-level research, and creating communities of practice. Let’s take each of these in turn.


Grant evaluation

At the NEA, every grant program must help achieve one of five outcomes: creation, engagement, learning, research, or livability. The Our Town grants are all measured against livability, and grantees report to us through a final descriptive report form specific to this outcome.

Unlike private endowment-driven funders, the NEA’s budget is allocated annually by Congress. Despite our name, the NEA does not, in fact, have an endowment, and we are mandated to make our grant decisions anew each year. These facts mean that the NEA cannot commit to funding specific projects over long periods of time, as is the practice with many foundations. (Organizations may, of course, re-apply to the agency.) The ability to make a multi-year commitment to a grantee is the moral prerequisite for doing a multi-year evaluation of that project. So we look at each grantee’s project on its own terms and measure it against its contributions to community livability.

These final descriptive reports allow the NEA to make an evaluation of each grant, but they are also a foundational element in fulfilling our other responsibilities, including both our national research into creative placemaking and our work to build communities of practice.


National research

Following publication of the Creative Placemaking white paper, several organizations and individuals approached us, requesting cost-effective solutions for better understanding and communicating the value their work added to their communities. Almost all of these groups were more than adept at documenting their work with images, video, and anecdote, but they lacked easy access to quantitative information.

We felt that we could play a key role in building an infrastructure to address this need. In order to better articulate the concept of livability that underpins the Our Town program, we posited a hypothesis that almost any successful creative placemaking project would make a difference to its community in at least one of four ways: strengthening the infrastructure that supports artists and arts organizations; increasing community attachment; improving quality of life; and/or driving local economies.

These particular dimensions of livability emerged from a review of extant literature, consultations with the field, and an initial review of grant applications. It also became apparent that these outcomes would be profoundly difficult to measure. So we decided that an appropriate next step would be to develop a framework of arts and livability indicators that would help the field think constructively about how these concepts might be reflected in data already being collected. The indicators are not intended to measure exactly what is happening in creative placemaking projects; they are instead – as the name implies – meant to indicate conditions on the ground that reflect important dimensions of livability and provide insights into relationships that might exist, thus highlighting areas for further research.

By tracking outcomes that are already publicly reported and widely available, we should be able to provide a reasonably reliable indicator of changes to a community’s overall livability. Are all or even some of these changes necessarily due to the presence of creative placemaking activities? Absolutely not – but at least they are the kinds of community-wide outcomes that should matter most to people and groups engaged in creative placemaking. By allowing such outcomes to be tracked easily, the indicators system will bypass the need for elaborate and expensive data collection tools and analytics on a project-by-project basis.

How will we know that the indicators we choose are the right ones? Because they will be based on a series of hypotheses soon to be tested in real communities. For instance, we hypothesize that one indicator of a strengthened arts infrastructure might be an increase in the number of employees at arts organizations. An indicator of community attachment might be the average length that a citizen has lived in a community. An indicator of quality of life might be lower crime rates. And an indicator of a strong local economy might be the number of valid business addresses in a community. Each of these example indicators is based on information already collected and made available by the Census Bureau, the FBI, and the U.S. Postal Service.
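As a purely illustrative sketch (ours, not an NEA artifact), the hypotheses above can be thought of as a simple mapping from livability dimensions to candidate indicators and the public data sources that already collect them; only the dimension names, indicators, and sources come from the text, while the structure and function names are our own invention:

```python
# Hypothetical sketch of the indicators framework described above.
# The dimensions, indicators, and data sources are taken from the post;
# the data structure itself is an illustration, not an NEA specification.
ARTS_LIVABILITY_INDICATORS = {
    "arts infrastructure": [
        {"indicator": "employees at arts organizations", "source": "Census Bureau"},
    ],
    "community attachment": [
        {"indicator": "average length of residency", "source": "Census Bureau"},
    ],
    "quality of life": [
        {"indicator": "crime rate (lower is better)", "source": "FBI"},
    ],
    "local economy": [
        {"indicator": "valid business addresses", "source": "U.S. Postal Service"},
    ],
}

def sources_for(dimension):
    """Return the public data sources backing a livability dimension."""
    return sorted({entry["source"] for entry in ARTS_LIVABILITY_INDICATORS[dimension]})

print(sources_for("local economy"))  # prints ['U.S. Postal Service']
```

The point of the sketch is the design choice the NEA describes: every indicator is tied to data an external agency already publishes, so no project-level data collection is required.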

We need to test each hypothesis in multiple communities because a single indicator may not work the same way in every place. For instance, at the NEA, we have spent a lot of time internally debating whether “length of commute” is a potential indicator of increased quality of life. Not surprisingly, those of us who live in urban centers think shorter commute times equate with a higher quality of life, while those of us who live in the suburbs and have chosen homes specifically farther away from work feel that longer commute times better correlate with a higher quality of life.

We are working with a team of researchers from the Urban Institute to explore these kinds of nuances for every indicator, testing and validating each hypothesis in multiple use cases and documenting the ways in which a single indicator is and is not an effective proxy. We are also working with and learning from other federal agencies that are similarly building indicator systems from nationally available data sets. It is possible to use an indicators system very effectively, indeed, but it is also all too easy to misuse one – and we want to do everything we can to avoid such pitfalls.

Our team will also assess whether the appropriate data can be accessed at the geographical level of detail we require. Recently, Ann Markusen shared a summary from Arizona State University Professor Emily Talen that was circulated on a listserv for urban planning researchers. This sort of granular investigation into the data available from, in this case, the American Community Survey is exactly the next order of business for our indicators team. So we are, yet again, indebted to Ann.

If we are successful in creating this indicators framework, then the nation’s arts organizations will have free and easy access to a system that helps them begin to visualize and report on some of the things happening alongside their creative placemaking projects. From a social science perspective, will these metrics prove a causal relationship? Again, absolutely not. But for citizens, funders, civic officials, and business leaders, they will provide a good indication of what is happening. And when viewed alongside qualitative data from the projects themselves, the indicators may provide sufficient evidence to satisfy stakeholders who seek assurance of the projects’ overall value. Others may wish to know more, and if so, the indicators and the qualitative data lay the foundation for further research and project-specific evaluation.

We believe this approach will help demystify data for organizations involved in creative placemaking. An organization might be brilliant at developing an outdoor festival that would literally bring art into the center of the public square. It might also excel at documenting the resulting changes it can observe in the surrounding neighborhood. But it may not be skilled at identifying and analyzing data sets, and it may not have the time or the funds to undertake an expensive and exhaustive research project. These organizations are exactly the target audience for our framework, since we will publish – in plain language – the data sets that pass our national validation tests and explain how to extricate only the data that are relevant to, in this case, innovative arts programming.

Photograph by Robert Allen, copyright Trey McIntyre Project (Our Town grantee).


Communities of practice

Taken together, these quantitative and qualitative impacts will allow the NEA to help connect and support communities of practice in creative placemaking.

We have issued an RFP for help in producing documentation that looks at each of the Our Town grantees and asks: what did you set out to do with your project; how did you go about doing your project; how do you know whether you succeeded; and what would you do differently having been through what you went through?

The field has been clamoring for “how-to” information. By combining the final descriptive reports from the Our Town grants, the indicators framework, and the in-depth documentation of each project, we will be able to play matchmaker.

A community that wants examples of successful, federally funded projects can comb through our analysis of the NEA’s final descriptive reports to learn which other communities have succeeded.

A community that would like to make a major investment in public art will be able to parse the in-depth descriptions of public art projects to see what lessons they can learn.

Even prior to all of those resources being available, we have started trying to create cross-community connections by having Our Town panelists share their insights and experiences in a series of archived webinars. We will do even more of these in the coming months, featuring grantees.


Moving forward

We are really only two years into this work, and are proud of all that we have been able to accomplish. But we are also humbled by the work ahead. The good news is that there continues to be national energy and excitement around creative placemaking, and we are eager for any and all feedback.

We hope that there will continue to be a robust conversation in blogs, on listservs, and throughout the Twittersphere. And we also hope that people will continue to feel free to interact directly with the agency. We are always eager to hear from you at schupbachj@arts.gov or iyengars@arts.gov.

(Enjoyed this post? We’re raising funds through July 10 to make the next generation of Createquity possible. We are 56% of the way there, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


Diane Ragsdale’s Wonderful Words and Nina Simon’s Participatory Museum

We’re feeling the love from two new arts luminaries today: Diane Ragsdale, provocateur and fellow blogger at Jumper, and Nina Simon, Executive Director at the Santa Cruz Museum of Art and History, have thrown their support behind our Indiegogo campaign. Diane offers some inspiring words of support in the video below, and Nina has given us a new campaign perk to offer: signed copies of her thought-provoking book, The Participatory Museum. We only have two copies to give away, and Nina doesn’t sign books very often, so we’re asking for $100 donations for these. Get ‘em while you can!

We’re excited to have passed the $5,000 mark in the campaign, putting us more than 50% of the way to our goal. It’s been utterly humbling to see how many of you think this project deserves your hard-earned dollars. To everyone who has donated or shared the campaign with others, we can’t thank you enough. For those of you just tuning in or still on the fence, we hope Diane and Nina can help convince you. If you value Createquity as a resource and feel that the site is worthy of a financial contribution, there’s never going to be a better time to donate.

If you’re reading this via email, you can check out Diane’s video here.
