Thank you!

All I can say is WOW. In the last 40+ hours of our campaign, more than $4,000 came in from 33 donors to put us over the top and then some. In the end, we blew past our goal, raising $11,430 (counting an employer match that was not included in the Indiegogo totals). Thank you to everyone who made this possible!

We’ll be following up with all of our donors individually, but I want to briefly thank a few folks in particular. First, I want to acknowledge our contributors at the Founding Sponsor level, MailChimp and Pamela York Klainer, without both of whom it would have been much harder for this campaign to meet its goal. You’ll be seeing their sponsor credits on our About page when we relaunch. Thomas Cott, Diane Ragsdale, Nina Simon, and Andrew Taylor all enhanced our campaign in special ways by either recording videos of support or donating items for us to offer as perks, and provided invaluable “social proof” for this enterprise. Our entire network of people who have been directly involved with Createquity came through huge, both contributing and spreading the word: 100% of our editorial team donated, along with many of our former Writing Fellows and guest bloggers, and we also received amazing support from coworkers, bosses, and current and former interns. And finally, I want to give a special thank you to my fellow editorial team member Jackie Hasa, who was incredibly helpful and effective in managing the campaign from behind the scenes.

There’s no time to sit back and savor the moment: even as the campaign has been going on, we’ve been hard at work planning an editorial retreat that is set to take place this weekend. Stay tuned for more announcements as we start to put these resources into action. We promise it will be money well spent.


[Createquity Reruns] Solving the Underpants Gnomes Problem: Towards an Evidence-Based Arts Policy

(Arts Research Week at Createquity concludes with this speech/post originally delivered at the University of Chicago’s Cultural Policy Center on November 14, 2012 and published on the blog in February 2013. This diagnosis of how our arts research infrastructure is failing us, a vision for how we could fix it, and why it all matters – a lot – is emblematic of the more advocacy-driven approach we intend to take upon our relaunch in the fall. I’m glad to say that there has been progress on some of these recommendations even in just the past year and a half, in particular the formation of the Cultural Research Network to connect researchers with each other and start the process of field-building. Another reason this talk is significant is that it led to my first connection with current Createquity editorial team member John Carnwath! -IDM)

The actual lecture portion of this talk occupies the first 52 minutes of the video, and the first 27 of those minutes are a recap/synthesis of material that will be familiar to regular readers of this blog (specifically, Creative Placemaking Has an Outcomes Problem and In Defense of Logic Models). Since I didn’t write out the speech in advance, I don’t have a transcript for it. However, below is a reconstruction of the new material from my notes, so you can get a taste of it if you don’t have time to watch the whole thing right now. (You’ll notice I make a number of generalizations in the speech about the ways in which arts practitioners interact with research. These are based on observation and personal experience, and are best understood as my working hypotheses.)

*

[starting at 26:55]

Why is this integration between data and strategy important? Because research is only valuable insofar as it influences decisions. This is why logic models are awesome – they are a visual depiction of strategy. And there is no such thing as strategy without cause and effect. Think about that for a second. Our lives can be understood as a set of circumstances and decisions. We make decisions to try to improve our circumstances, and sometimes the circumstances of those around us. Every decision you make is based on a prediction, whether explicitly articulated or not, about the results of that decision. Every decision, therefore, carries with it some degree of uncertainty. This uncertainty can be expressed another way: as an assumption about the way the world works and the context in which your decision is being made. These assumptions are distinguished from known facts.

If you can reduce the uncertainty associated with your assumptions, the chances that you will make the right decision will increase. So, how do you reduce that uncertainty? Through research, of course! Studying what has happened in the past can inform what is likely to happen in the future. Studying what has happened in other contexts can inform what is likely to happen in your context. And studying what is happening now can tell you whether your assumptions seem spot on or off by a mile. Alas, research and practice in our field are frequently disconnected in problematic ways. Six issues are preventing us from reaching our potential.

Issue #1: Capacity

Supply and demand apply as much to research as they do to artists. There are far more studies out there than a normal arts professional can possibly fully process. I wish I could tell you how many research reports are published in the arts each year, but nobody knows! To establish a lower bound, I went back over last year’s [2011] “around the horn” posts, which report new research studies that I hear about. I counted at least 41 relevant arts-research-related publications – a tiny fraction, I’m sure, of total output. To make matters worse, research reports are long, and arts professionals are busy. For the Createquity Writing Fellowship program, participants are required to analyze a work of arts research for the Createquity Arts Policy Library. I collect data on how long it takes to do this, and consistently, it requires 30-80 hours to research, analyze and write just one piece! Multiply this by the number of new studies each year, and you can start to see the magnitude of the problem.

Issue #2: Dissemination

Which research reports is an arts practitioner likely to even know about? Certainly not all of them, because there is almost no meaningful connection between the academic research infrastructure and the professional arts ecosystem. Lots of research relevant to the arts is published in academic journals each year, but unless the author was commissioned to do the work by a foundation, we never hear about it. Academic papers are typically behind a paywall, and most arts organizations don’t have journal subscriptions. To give an example, after I wrote about Richard Florida’s Rise of the Creative Class, Florida pointed me to a study in two parts by two Dutch researchers. It’s one of the best resources I’ve come across for creative class theory, but I’ve never heard anyone even mention either study other than him and me.

Issue #3: Interpretation

Research reports inevitably reflect the researcher’s voice and agenda. This is especially true of executive summaries and press releases, which are often all anyone “reads” of research reports. Probably the most common agenda, of course, is to convey that the researcher knows what he/she is talking about. Another common agenda is to ensure repeat business from, or at least a continuing relationship with, the client who commissioned the study. The reality, however, is that research varies widely in quality. There’s no certification process; anyone can call themselves a researcher. But even highly respected professionals can make mistakes, pursue questionable methods, or overlook obvious holes in their logic. And, in my experience, the reality of any given research effort is usually nuanced – some aspects of it are much more valuable than others. Unfortunately, many arts professionals lack the expertise to properly evaluate research reports, never having had even basic statistics training.

Issue #4: Objectivity

Research is about uncovering the truth, but sometimes people don’t want to know the truth. Advocacy goals often precede research. How many times have you heard somebody say a version of the following: “We need research to back this up”? That statement suggests a kind of research study that we see all too often: one that is conducted to affirm decisions that have already been made. By contrast, when we create a logic model, we start from the end: we identify what we are trying to achieve and only then determine the activities necessary to achieve it.

Here are a bunch of bad, but common reasons to do a research project:

  • To prove your own value.
  • To increase your organization’s prestige.
  • To advance an ideological agenda.
  • To provide political cover for a decision.

There is only one good reason to do research, and that is to try to find out something you didn’t know before.

Issue #5: Fragmentation

The worst part of the problem I just described is that it drives what research gets done – and what doesn’t get done. There is no common research agenda adopted by the entire field, which is a shame, because collective knowledge is pretty much the definition of a public good: if I increase my own knowledge, it’s very easy for me to increase your knowledge too. The practical consequences of this fragmentation are severe. It results in a concentration of research using readily available data sources (ignoring the fact that the creation of new data sources may be more valuable). It results in a concentration of research in geographies and communities that can afford it, because people don’t often pay for research that’s not about them. And it results in a concentration of research serving narrow interests: discipline-specific, organization-specific, methodology-specific. My biggest pet peeve is that research is almost never intentionally replicated – everybody’s reinventing the wheel, studying the same things over and over again in slightly different ways. A great example of a research study crying out for replication is the Arts Ripple Effect report, which I talked about earlier. The results of that study are now guiding the distribution of millions of dollars in annual arts funding. Are those results universal, or unique to the Greater Cincinnati region? We have no way to know.

Issue #6: Allocating resources

Everyone knows there’s been a trend in recent years towards more and more data collection at the level of the organization or artist. Organizations, especially small ones, complain all the time about being expected to do audience surveys, submit onerous paperwork, and so forth. And you know what, I agree with them! You might be surprised to hear me say that, but when you’re talking about organizations that have small budgets and no expertise to do this kind of work, and the funder requesting the information isn’t providing any assistance to get it, my advice to that funder is: just take a risk! You make a small grant that goes bad, so what? You’re out a few thousand dollars. The sun will rise tomorrow.

As an example of what I’m talking about, I participated in a grant panel recently. I enjoyed the experience, and am glad I did it, but there’s one aspect of the experience that is relevant here. There were seven panelists, and we were all from out of town. Each of us spent, I’d say, roughly 40 hours reviewing applications in advance of the panel itself. Then we all got together for two full days in person to review these grants some more and talk about them and score them. We did this for 64 applications for up to $5,000 each, and in the end, 94% were funded.

So consider this as a research exercise. The decision is who to give grants to, and how much. The data is the grant applications. The researchers are the review panel. What uncertainty is being reduced by this process? How much worse would the outcome have been if we’d just taken all the organizations, put them into Excel, run a random number generator, and distributed the dollars randomly up to $5,000 per organization? And I’m not saying this to make fun of this particular organization or single them out, because honestly it’s not uncommon to take this kind of approach to small-scale grantmaking. And yet if you compare it to ArtPlace’s first round of grants, theoretically they had thousands of projects to choose from, and they gave grants up to $1 million for creative placemaking projects – but there was no [open] review process; they just chose organizations to give grants to. So there’s a bit of a mismatch in the strategies we use to decide how to allocate resources.
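To see just how low that baseline is, here is the random-allocation strategy written out as a short Python sketch. The figures echo the hypothetical above (64 applications, up to $5,000 each); they are illustrative only, not the panel’s actual data.

```python
import random

random.seed(2013)  # any seed will do; this is a thought experiment, not a method

orgs = [f"org_{i:02d}" for i in range(1, 65)]            # 64 applications
awards = {org: random.randint(0, 5000) for org in orgs}  # random award up to $5,000

funded = sum(1 for amount in awards.values() if amount > 0)
print(f"Organizations funded: {funded} of {len(orgs)}")
print(f"Total distributed: ${sum(awards.values()):,}")
```

The point, of course, is not to actually fund organizations this way; it’s that roughly 280 person-hours of advance review plus two days of deliberation ought to buy a decision measurably better than this baseline.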

There’s a concept called “expected value of information” described in a wonderful book called How to Measure Anything, by Douglas W. Hubbard. It’s a way of taking into account how much information matters to your decision-making process. In the book, Hubbard shares a couple of specific findings from his work as a consultant. He found that most variables have an information value of zero; in other words, we can study them all we want, but whatever the truth is, it’s not going to change what we do, because they don’t matter enough in the grand scheme of things. And he also found that the things that matter the most, the kinds of things that really would change our decisions, often aren’t studied, because they’re perceived as too difficult to measure. So we need to ask ourselves how new information would actually change the decisions we make.
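For the technically inclined, here is a stripped-down version of that calculation as a Python sketch. All the figures are invented, and Hubbard’s treatment is far more general, but it captures the core logic: information is worth, at most, the improvement in expected outcome that a better-informed choice could deliver.

```python
# A toy "expected value of information" calculation, in the spirit of
# Douglas Hubbard's How to Measure Anything. All figures are hypothetical.

p_success = 0.6             # current belief that a funded program will work
value_if_success = 100_000  # net value if we fund it and it works
value_if_failure = -40_000  # net value if we fund it and it flops
value_if_pass = 0           # net value of not funding at all

# The best we can do acting on current beliefs alone:
ev_fund = p_success * value_if_success + (1 - p_success) * value_if_failure
ev_without_info = max(ev_fund, value_if_pass)

# With perfect information, we would make the better choice in each state:
ev_with_info = (p_success * max(value_if_success, value_if_pass)
                + (1 - p_success) * max(value_if_failure, value_if_pass))

# The most that studying this variable could possibly be worth. If no finding
# could change the decision, this comes out to exactly zero: Hubbard's
# observation that "most variables have an information value of zero."
evpi = ev_with_info - ev_without_info
print(f"Expected value without research: ${ev_without_info:,.0f}")
print(f"Expected value with perfect information: ${ev_with_info:,.0f}")
print(f"Expected value of perfect information: ${evpi:,.0f}")
```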

There is so much untapped potential in arts research. But it remains untapped because of all the issues described above. So what can we do about it?

First, we need a major field-building effort for arts research. Connecting researchers with each other through a virtual network/community of practice would help a lot. So would a centralized clearinghouse where all research can live, even if it’s behind a copyright firewall. The good news is that the National Endowment for the Arts has already been making some moves in this direction. The Endowment published a monograph a couple of months ago called “How Art Works,” the major focus of which was a so-called “system map” for the arts. But the document also includes a pretty detailed research agenda (for the NEA, not for the entire field) that lays out what the NEA’s Office of Research and Analysis is going to do over the next five years, and two of the items mentioned are exactly the two things I just talked about: a virtual research network and a centralized clearinghouse for arts research.

This new field that we’re building should be guided by a national research agenda that is collaboratively generated and directly tied to decisions of consequence. The missing piece from the research agenda in “How Art Works” is the tie to actual decisions. Instead it has categories, like cultural participation, and research projects can be sorted into those buckets. But it’s not enough for research to simply be about something – research should serve some purpose. What do we actually need to know in order to do our jobs better?

We should be asking researchers to spend less time generating new research and more time critically evaluating other people’s research. We need to generate lots more discussion about the research that is already produced. That’s the only way it’s going to enter the public consciousness. Each time we fail to do that, we are missing out on opportunities to increase knowledge. It will also raise our collective standards for research if we are engaging in a healthy debate about it. But realistically, in order for this to happen, field incentives are going to have to change – analyzing existing research will need to be seen as just as prestigious, and as worthy of funding, as creating a new study. Of course, I would prefer that people not evaluate the work of their direct competitors – but I’ll take what I can get at this point!

Every research effort should take into account the expected value of the information it will produce. Consider the risk involved in the various types of grants being made. What are you trying to achieve by giving out lots of small grants, if that’s what you’re doing? Maybe measure the effectiveness of the overall strategy instead of the success or failure of each grant. This is getting into hypothesis territory, but based on what I’ve seen so far I would guess that research on grant strategy is woefully underfunded, while research on the effectiveness or potential of specific grants is probably overfunded. We probably worry more than we need to about individual grants, but we don’t worry as much as we should about whether the ways in which we’re making decisions about which grants to support are the right ways to do that.

Finally, we should be open-sourcing research and working as a team. I’m talking about sharing not just finished products and final reports, but plans, data, methodologies as well. I’m talking about seeking multiple uses and potential partners at every point for the work we’re doing. This would make our work more effective by allowing us to leverage each other’s strengths – we’re not all experts at everything, after all! And it would cut down on duplicated effort and free up expensive people’s time to do work that moves the field forward.

I thank everyone for their time, and I’d love to take any questions or comments on these thoughts about the state of our research field.

(Enjoyed this post? Today is the last day of our campaign to make the next generation of Createquity possible. We’re thrilled to have reached our initial goal, but additional contributions are still welcome and will be put to good use in strengthening us for the future. Thank you for your support!)


Last Chance to Take Createquity to the Next Level!

Createquity readers, tomorrow is the final day in our Indiegogo funding campaign. Thanks to your generous contributions, as of this writing we have raised $7,385 from 93 funders toward our $10,000 goal. It’s been truly humbling to witness the number of people who care enough about high-quality information and analysis in the arts to contribute. And with just about 36 hours left in the campaign, it’s time to put the pedal to the metal to bring us over the top. If you believe we as a sector need better, data-driven advocacy, or simply appreciate Createquity as a resource for your work, please donate today!

One of the most gratifying things about this campaign so far has been seeing the wave of support we’ve received from people whose work is central to our field. Barry Hessenius, whose blog is another widely-read resource among arts managers, graced us last week with a completely unsolicited and glowing endorsement of this project:

I hope you will go to the Indiegogo site and support this effort.  I did….I can give you two good reasons why you might part with the cost of a couple of Starbuck’s half caffeine, double mocha, caramel, latte frappacinnos:  First:  Ian and the people he has assembled to help with his newest reinvention of his site are exactly the people we want to support in our field – young, smart, dedicated, committed people who are already making a contribution to the field to help make things better for everyone.  Supporting that alone ought to be worth ten or twenty bucks.  But Second, I can almost guarantee you that if you follow whatever Createquity does over the next year you will read two or more posts that you (you personally) will find of great value to what you are doing on your job.  That ought to be worth a few bucks, no?

And how often do you get to play Santa Claus in July?

Indeed, it’s been amazing to see the movers and shakers who find value in Createquity’s work. Every year, with the help of a pool of nominators, Barry compiles a list of the nonprofit arts sector’s 50 most powerful and influential leaders. More than a fifth of the 2013 list has contributed to our campaign so far. The show of support from our field has been extraordinary, with donations from star consultants like Holly Sidford (Helicon Collaborative), Alan Brown (WolfBrown), Adrian Ellis (AEA), Jerry Yoshitomi (MeaningMatters), Claudia Bach (AdvisArts), and Anne Gadwa Nicodemus (Metris Arts Consulting); arts organization leaders like Adam Huttler (Fractured Atlas), Laura Zucker (LA County Arts Commission), Mara Walker (Americans for the Arts), and Kemi Ilesanmi (The Laundromat Project); current and former foundation leaders like Kerry McCarthy (New York Community Trust), Angelique Power (Joyce Foundation), and Marian Godfrey (ret. Pew Charitable Trusts); and fellow arts thinkers and information mavens Doug McLennan (ArtsJournal), Nina Simon (Museum 2.0), Thomas Cott (You’ve Cott Mail), Andrew Taylor (The Artful Manager), and Diane Ragsdale (Jumper). The latter four have contributed to our campaign in particularly special ways: Thomas, Andrew, and Diane all were kind enough to record video testimonials for us (embedded below), and Nina is donating two rare signed copies of her classic read The Participatory Museum, which are available to donors at the $100 level. Grab ‘em fast!

I hope you agree with us that this is a pretty incredible list. Won’t you add your name to it and help us cross the finish line?


[Createquity Reruns] Public Art and the Challenge of Evaluation

(Createquity’s summer rerun programming continues this week with a focus on arts research! This instant classic by Createquity Writing Fellow Katherine Gressel spread like wildfire when it was first published in January 2012, and remains our third-most popular post ever. It even brought us a bunch of new readers from Australia! [Long story.] While not a short read, it’s packed with useful information about how practitioners have gone about conceptualizing and evaluating one of the hardest beasts to measure – public art. -IDM)

Steve Powers, “Look Look Look,” Part of the “A Love Letter for You” project, commissioned by the Philadelphia Mural Arts Program, 2009-2010. http://www.aloveletterforyou.com

In the Spring/Summer 2011 issue of Public Art Review, Jack Becker writes, “There is a dearth of research efforts focusing on public art and its impact. The evidence is mostly anecdotal. Some attempts have focused specifically on economic impact, but this doesn’t tell the whole story, or even the most important stories.”

Becker’s statement gets at some of the main challenges in measuring the “impact” of a work of public art—a task which more often than not provokes grumbling from public art administrators. When asked how they know their work is successful, most organizations and artists that create art in the public realm are quick to cite things like people’s positive comments, or the fact that the artwork doesn’t get covered with graffiti or cause controversy.

We are much less likely to hear about systematic data gathered over a long time period—largely due to the seemingly complex, time-consuming, or futile nature of such a task. Unlike museums or performance spaces, public art traditionally doesn’t sell tickets, or attract “audiences” who can easily be counted, surveyed, or educated. A public artwork’s role in economic revitalization is difficult to separate from that of its overall surroundings. And as Becker suggests, economic indicators of success may leave out important factors like the intrinsic benefits of experiencing art in one’s everyday life.

However, public art administrators generally agree that some type of evaluation is key in not only making a case for support from funders, but in building a successful program. In the words of Chicago Public Art Group (CPAG) executive director Jon Pounds, evaluations can at the very least “help artists strengthen their skills…and address any problems that come up in programming.” Is there a reliable framework that can be the basis of all good public art evaluation? And what are some simple yet effective evaluation methods that most organizations can implement?

This article will explore some of the main challenges with public art evaluation, and then provide an overview of what has been done in this area so far with varying degrees of success. It builds upon my 2007 Columbia University Teachers College Arts Administration thesis, And Then What…? Measuring the Audience Impact of Community-Based Public Art. That study specifically dealt with the issue of measuring audience response to permanent community-based public art, and included interviews with a wide range of public artists and administrators.

This article will discuss evaluation more broadly—moving beyond audience response—and incorporate more recent interviews with leaders in the public art field. My goal was not to generate quantitative data on what people are doing in the field as a whole with evaluation (according to Liesel Fenner, director of Americans for the Arts’s Public Art Network, such data is not yet available, though it is a goal). Instead, I have reviewed recent literature on public art assessment, and interviewed a range of different types of organizations, from government-run “percent for art” and transit programs to grassroots community-based art organizations in New York City (where I am based) and other parts of the United States. I sought to find out whether evaluation is considered important, how much time is devoted to it, and the details of particularly innovative efforts.

The challenge of defining what we are actually evaluating

The term “public art” once referred to monumental sculptures celebrating religious or political leaders. It evolved during the mid-twentieth century to include art meant to speak for the “people” or advance social and political movements, as in the Mexican and WPA murals of the 1930s, or the early community murals of the 1960s-1970s civil rights movements. Today, “public art” can describe anything from ephemeral, participatory performances to illegal street art to internet-based projects. The intended results of various types of public art, and our capacity to measure them, are very different.

In the social science field, evaluation typically involves setting clear goals, or expected outcomes, connected to the main activities of a program or project. It also involves defining indicators that the outcomes have been met. This exercise often takes the form of a “theory of change.” Since there are so many types of public art, it is exceedingly difficult to develop one single “theory of change” for the whole field, but it may be helpful to use a recent definition of public art from the UK-based public art think tank Ixia: “A process of engaging artists’ ideas in the public realm.” This definition implies that public art will always occupy some kind of “public realm” (whether it is a physical place or an otherwise-defined community) and require an “engagement” with the public that may or may not produce a tangible artwork as an end result. This process and the reactions of the public must be evaluated along with whatever artistic product may come out of it.

The challenge of building a common framework for evaluation

In 2004, Ixia commissioned OPENspace, the research center for inclusive access to outdoor environments based at the Edinburgh College of Art and Heriot-Watt University, to research ways of evaluating public art, ultimately resulting in a comprehensive 2010 report, “Public Art: A Guide to Evaluation” (see a helpful summary by Americans for the Arts). The guide’s emphasis and content were shaped by feedback from Ixia’s Evaluation Seminars and fieldwork conducted by Ixia and consultants who have used its Evaluation Toolkit. Ixia provides the most comprehensive resources on evaluation that I have encountered, with two main evaluation tools: the evaluation matrix and the personal project analysis. These are helpful as a starting point for evaluating any project or program.

The matrix’s goal is to “capture a range of values that may need to be taken into account when considering the desirable or possible outcomes of engaging artists in the public realm.” It is meant to be filled out by various stakeholders during a project-planning stage, as well as at the midpoint and conclusion of a project.

Ixia’s “personal project analysis” is “a tool for process delivery that aims to assess how a project’s delivery is being put into practice.” I will not analyze it in detail here, except to say that something similar should also ideally be part of any organization’s evaluation plan, as it allows for assessing how well the project is being carried out.

Personal Project Analysis from Ixia’s “Public Art: A Guide to Evaluation”

Matrix from Ixia’s “Public Art: A Guide to Evaluation”

Ixia’s matrix identifies four main categories of values:

  1. Artistic Values [visual/aesthetic enjoyment, design quality, social activation, innovation/risk, host participation, challenge/critical debate]
  2. Social Values [community development, poverty and social inclusion, health and well-being, crime and safety, interpersonal development, travel/access, and skills acquisition]
  3. Environmental Values [vegetation and wildlife, physical environment improvement, conservation, pollution and waste management (air, water, and ground quality), and climate change and energy]
  4. Economic Values [marketing/place identity, regeneration, tourism, economic investment and output, resource use and recycling, education, employment, project management/sustainability, and value for money]

The matrix accounts for the fact that each public artwork’s values and desired outcomes will be different depending on the nature of the presenting organization, site, and audience.

It is unclear how widely these tools have been adopted in the UK since their publication, and I did not encounter anyone in the U.S. using them. Yet many organizations are employing a similar process of engaging various stakeholders during the project-planning phase to determine goals specific to each project, which relate to the categories in Ixia’s matrix. For example, most professionals I interviewed cited some type of “artistic” goals for the work. Some organizations prioritize presenting the highest quality art in public spaces, in which case the realization of an artist’s vision is top priority (representatives of New York City’s Percent for Art program described “Skilled craftsmanship” and “clarity of artistic vision” as key success factors, for example).

By contrast, organizations that include a youth education or community justice component may rank “social” or “economic” values higher. Groundswell Community Mural Project, an NYC-based nonprofit that creates mural projects with youth, asks all organizations that host mural projects (which may include schools, government agencies, and community-based organizations) to choose, in pre-project surveys, their top desired project outcomes from a range of choices, as well as to identify project-specific issues. Groundswell does have a well-developed theory of change behind all its projects, relating to the organization’s core mission to “beautify neighborhoods, engage youth in societal and personal transformation, and give expression to ideas and perspectives that are underrepresented in the public dialog.” However, some project-specific outcomes may be more environmental (for example, partnerships with the Trust for Public Land to integrate murals into new school playgrounds), while some relate to “crime and safety,” as in an ongoing partnership with the NYC Department of Transportation to install murals and signs at dangerous traffic intersections that educate the public about traffic safety.


Groundswell Community Mural Project, signs from “Traffic Safety Program,” a partnership between Groundswell, the Department of Transportation’s Safety Education program, and several NYC public elementary schools. Lead artists Yana Dimitrova, Chris Soria, and Nicole Schulman worked with students to create these signs installed at locations identified as most in need of traffic signage.

Groundswell is just one example of many public art organizations that set goals at the outset of each individual project, based on each project’s particular site and community. While individual organizations may effectively evaluate their own projects this way, crafting a common theory of change for all public art may be an unrealistic expectation.

The challenge of reliable indicators and data collection

The Ixia report discusses the process by which indicators of public art’s ability to produce desired outcomes may be identified, with the following questions:

  1. Is it realistic to expect a public art project to influence the outcomes you are measuring?
  2. Is it likely that you can differentiate the impact of the public art project and processes from other influences, e.g., other local investment?
  3. Is it possible to collect meaningful data on what matters in relation to the chosen indicators?

For example, in studies seeking to measure any kind of change, good data collection should always include a baseline—i.e., economic conditions or attitudes of people BEFORE the public art entered the picture. Data collection methods ideally should also be reliable, unbiased, and easily replicated.
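A trivial sketch makes the point (the indicator names and figures below are invented): without the “before” row, the “after” numbers alone cannot demonstrate change at all.

```python
# Hypothetical indicator readings before and after a mural installation.
before = {"foot_traffic_per_day": 1200, "share_positive_attitudes": 0.54}
after = {"foot_traffic_per_day": 1260, "share_positive_attitudes": 0.55}

for indicator, baseline in before.items():
    change = after[indicator] - baseline
    print(f"{indicator}: {baseline} -> {after[indicator]} (change: {change:+.2f})")
```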

The “Guide to Evaluation” does not go into detail about any concrete indicators of public art’s “impact.” Therefore, the matrix seems to be most useful as a guide to goal-setting. As the Americans for the Arts summary of this report points out, “Ixia directs users to [UK-based] government performance indicators as a baseline source, but that is where the discussion ends.”

Liesel Fenner of Americans for the Arts’s Public Art Network mentioned in an email to me that while PAN hopes to develop a comprehensive list of indicators in the future, which can be shared among public art presenters nationally, “developing quantitative indicators is the main obstacle.”

According to my interviews with both on-the-ground administrators and public art researchers, many busy arts administrators find the type of data collection recommended in Ixia’s guide difficult, costly, and time-consuming. It can be a challenge to get artistic staff to buy into even basic evaluation; as one community arts administrator puts it, “artists are paid for their leadership in developing and delivering a strong project. Many artists don’t see as much value in evaluation because, in part, it comes in addition to the difficult work that they just accomplished.” It is also uncommon to spend precious training resources on something like quantitative evaluation techniques.

Some are of the opinion that even if significant time were spent on justifying public art’s existence by “proving” its practical usefulness, this would still be a losing battle that could lead to the withdrawal of support for public art, the production of bad art that panders merely to public needs, or both. One seasoned public art administrator asked me: “Is architecture evaluated this way? The same way public buildings need to exist, public art needs to exist. It’s people looking to weaken public art who are trying to ask these questions about its impact.”

The challenge of evaluating long-term, permanent installations

Glenn Weiss, former director of the Times Square Alliance Public Art Program and current director of Arts League Houston, posits that economic impact studies are “most possible with highly publicized, short-term projects like the Gates or large public art festivals.” Indeed, the New York City Mayor’s office published a detailed report on “an estimated $254 million in economic activity” that resulted from The Gates, a large installation in Central Park by internationally acclaimed artists Christo and Jeanne-Claude, based on data like increased park attendance and business at nearby hotels, restaurants, etc. However, most public art projects, even temporary ones, are not as monumental or heavily promoted as The Gates, making it difficult to prove that people come to a neighborhood, or frequent its businesses, primarily to see the public art.

Visitors crowd Christo and Jeanne-Claude’s “The Gates” (2005) in Central Park. Photo by Eric Carvin.

Weiss also believes that temporary festivals are generally easier to evaluate quantitatively than long-term public art projects. For example, during a finite event or installation, staff members can keep a count of attendees (some of the temporary public art projects I have encountered in my research, such as the FIGMENT annual participatory art festival on Governors Island and in various other U.S. cities, use attendance counts as a measure).

The few comprehensive studies connecting long-term, permanent public art to economic and community-wide impacts, conducted by research consultants and funded by specific grants, have led to somewhat inconclusive results. For example, An Assessment of Community Impact of the Philadelphia Department of Recreation Mural Arts Program (2002), led by Mark J. Stern and Susan C. Seifert of University of Pennsylvania’s Social Impact of the Arts Project (SIAP), cites the assumed community-wide benefits of murals outlined in MAP’s mission statement at the time of the study:

The creation of a mural can have social benefits for entire communities…Murals bring neighbors together in new ways and often galvanize them to undertake other community improvements, such as neighborhood clean-ups, community gardening, or organizing a town watch. Murals become focal points and symbols of community pride and inspiring reminders of the cooperation and dedication that made their creation possible.

Yet when asked to “use the best data available to document the impact that murals have had over the past decade on Philadelphia’s communities,” Stern and Seifert found that

this is a much more difficult task than one might imagine. First, there are significant conceptual problems involved in thinking through exactly how murals might have an impact on neighborhoods. Second, the quality of data available to test hypotheses concerning murals is limited. Finally, there are a number of methodological problems involved in using the right comparisons in assessing the potential impact of murals. For example, how far from a mural might we expect to see an impact? How long after a mural is painted might it take to see an effect and how long might that effect last?…Ultimately, this report concludes that these issues remain a significant impediment to understanding the role of murals.

By comparing data on murals to existing neighborhood quality of life data, Stern and Seifert considered murals’ connection to factors like community economic investment and indicators of more general neighborhood change (such as reduced litter or crime, or residents’ investment in other community organizing activities). The study also measured levels of community investment and involvement in murals. However, the scarce data available on these factors, according to the authors, are difficult to connect directly to public art in a cause and effect relationship. Stern and Seifert’s strongest finding was that murals may build “social capital,” or “networks of relationships” that can promote “individual and group well-being,” because of all the events surrounding mural production in which people can participate. It was more difficult to show a consistent relationship between murals and other theorized outcomes, such as ability to “inspire” passersby or serve as “amenities” for neighborhoods. The study recommends that “more systematic information on their physical characteristics and sites—‘before and after’—would provide a basis for identifying murals that become an amenity.”

A more recent 2009 report on Philadelphia’s commercial corridors by Econsult also demonstrated “some indication of a positive correlation” between the presence of murals and shopping corridor success. Murals are described here as “effective and cost efficient ways of replacing eyesores with symbols of care.” However, the report also adds the disclaimer that a positive correlation is not necessarily proof of the murals’ role as the primary cause of a neighborhood’s appeal.

So what can we assess most easily, and how?

My research revealed that quantitative data on the short-term inputs and outputs of public art programs is frequently cited (sometimes inappropriately) in reports and funding proposals as evidence of a program’s success—for example, number of new projects completed in one year, number of youth or community partners served, or number of mural tour participants. However, in this article I am not really focusing on this type of reporting, as it does not address how public art impacts communities over time.

The good news is that there are several examples of indicators that are more easily measurable in certain types of public art situations, including permanent installations. These include:

  • Testimonies on the educational and social impact of collaborative public art projects, from youth and community participants and artists alike
  • Qualitative audience responses to public art, including whether or not the art provokes any type of discussion, debate, or controversy
  • How a public artwork is treated over time by a community, including whether it gets vandalized, and whether the community takes the initiative to repair or maintain it
  • Press coverage
  • The “use” of a public artwork by its hosts, e.g. in educational programs or marketing campaigns
  • Levels of audience engagement with public art via internet sites and other types of educational programming

Below, I summarize some helpful methods for collecting data on these indicators.

Mining the Press

Archiving press coverage of public art projects online is a common practice among organizations, as is presenting pithy press clippings and quotes in funding proposals and marketing materials as a means of demonstrating a project’s success. For researchers, studying articles (and increasingly, blog posts) on past projects can also provide rich documentation of artworks’ immediate effects, as well as points of comparison. For example, the “comments” sections of online articles and blogs can generate interesting, often unsolicited feedback, albeit from a nonrandom sample.

One possible outcome of public art projects is controversy, which is not always considered a bad thing, despite now-infamous examples of projects like Richard Serra’s Tilted Arc being removed. For example, Sofia Maldonado’s 42nd Street Mural, presented in March 2010 by the Times Square Alliance, provoked extensive coverage on news programs and blogs. The mural’s un-idealized images of Latin American and Caribbean women based on the artist’s own heritage led some women’s and cultural advocacy organizations to call for its removal. The Alliance opted to leave the mural up, and has cited this project as evidence of the Alliance’s commitment to artists’ freedom of expression. The debates led Maldonado to reflect, “as an art piece it has accomplished its purpose: to establish a dialogue among its spectators.”

Sofia Maldonado, “42nd Street Mural,” 2010, Commissioned by the Times Square Alliance Public Art Program.

Site visits and “public art watch”

As an attempt to promote more sustained observation of completed works over time, public art historian Harriet Senie assigns her students in college and graduate level courses a final term paper project every semester that contains a “public art watch”:

“For the duration of a semester, on different days of the week, at different times, students observe, eavesdrop, and engage the audience for a specific work of public art. Based on a questionnaire developed in class and modified for individual circumstances, they inquire about personal reactions to this work and to public art in general” (quoted in Sculpture Magazine).

Senie’s students also observe things like people’s interactions with an artwork, such as how often they stop and look up at it, take pictures in front of it, or use it as a meeting place.

Senie maintains that “Although far from ‘scientific,’ the information is based on direct observation over time—precisely what is in short supply for reviewers working on a deadline.” This approach towards challenging college students to think critically about public art has also been implemented in public art courses at NYU and Pratt Institute, and the aggregate results of student research over time are summarized in one of Senie’s longer publications.

I have not encountered any other organizations able to integrate this type of research into their regular operations; however, there may be opportunities to integrate direct observation into routine site visits to completed permanent public artworks.

In the NYC Percent for Art program, and its Public Art for Public Schools (PAPS) wing that commissions permanent art for new and renovated school buildings, staff members are expected to undertake periodic visits “to monitor the condition of artworks that have been commissioned,” according to PAPS director Tania Duvergne. Such “maintenance checks” can provide opportunities to survey building inhabitants or local residents about their opinions and use of the artworks.

Duvergne uses these “condition report” visits as opportunities to further her agency’s mission to “bridge connections between what teachers are already doing in their classrooms and their physical environments.” At each site, she tries to interview custodians, teachers, principals and students about whether the art is well treated, whether they know anything about the artwork (and are using the online resources available to them), and whether they want more information. Duvergne notes that many teachers use the public art in their teaching in some way, even if they do not know a lot about the artwork. While observing a public artwork during a site visit every few years is nowhere near as extensive and sustained as the observation in Senie’s class assignment, perhaps a similar survey and observation exercise could be undertaken with a wide range of students and staff members over the course of a day.

Project participant and resident surveys

Organizations that create community-based public art usually have specific desired social, educational, or behavioral outcomes in project participants. Mural organizations Groundswell and Chicago Public Art Group describe thorough evaluation processes in which mural artists, youth, community partners and parents are all surveyed and sometimes interviewed before, during and after projects. Groundswell’s community partner post-project survey, for example, asks partners to rank their level of agreement about whether certain community-wide outcomes have been met, such as whether the mural increases the organization’s visibility, increases awareness of an identified issue, and improves community attitudes towards young people.

Groundswell’s theory of change (most recently honed in 2010 through focus groups with youth participants and community partners) articulates various clear desired outputs and outcomes for both youth and community partner organizations. This includes the development of “twenty-first century” life skills in teen mural participants. To measure this impact specifically, Groundswell has made it a priority to continue to track youth participants after they graduate, turn 21, and reach other checkpoints, according to Executive Director Amy Sananman. Groundswell recently hired an outside researcher to build a comprehensive database (using the free program SalesForce), in which participant data and survey results, and data on completed murals (such as whether any were graffitied, how many times they appeared in news articles, etc.) can be entered and compared to generate reports.

In 2006, Philadelphia’s Mural Arts Program conducted a community impact study using audience response questionnaires as a starting point. Then-special projects manager Lindsey Rosenberg employed college students, through partnerships with local universities, to conduct door-to-door surveys of all residents living within a mile radius of four murals. The murals differed by theme, neighborhood, and level of community involvement. The interns orally administered a multiple-choice questionnaire with questions ranging from general opinions of the murals to level of participation in making the murals to perceptions of changes in the neighborhood as a result of the murals. They then entered the surveys into a computer database specifically created for this study by outside consultants. The database not only calculated the percentages of each response to the murals, but also tracked correlations between these responses and census demographic data, including income level and home ownership.

This research project was different from prior MAP community impact studies in that it assumed that “what people perceive to be the impact of a mural is in itself valuable,” as much as external evidence of change.

In 2007, MAP shared some preliminary results of this endeavor with me to aid my thesis research. At the time the research seemed to generate some useful data on which murals were appreciated most in which neighborhoods, and the correlation between appreciation and community participation in the projects. However, since then I have not been able to gather any further information on this study, or find any published results. I did hear from MAP at the time of the study that only 25% of people who were approached actually took the surveys, indicating just one problematic aspect of conducting such research on a regular basis. The database was also costly.

Most recently, MAP is partnering (page 160) with the Philadelphia Department of Behavioral Health & Mental Retardation Services (DBH/MRS), community psychologists from Yale, and almost a dozen local community agencies and funders with core support from the Robert Wood Johnson Foundation, on “a multi-level, mixed methods comparative outcome trial known as the Porch Light Initiative. The Porch Light Initiative examines the impact of mural making as public art on individual and community recovery, healing, and transformation and utilizes a community-based participatory research (CBPR) framework.” Unfortunately, MAP declined my requests for more information on this new study.

Interviewing youth and community members can of course only generate observations and opinions, but Groundswell at least is also taking the step of tracking what happens to participants after they complete a mural project. I am still not clear on how to prove that any impacts on participants are a direct result of public art projects. Yet surveying project participants and community members about their feelings about a program or project, and how they think they were impacted by it, is one of the most do-able types of research (apart from the challenges of getting people to fill out surveys).

Community-based “proxies”

Groundswell director Amy Sananman has described some success in utilizing community partners as “proxies” for reporting on a mural’s local impact, effectively outsourcing some of the burden of data collection to other organizations. For example, the director of a nonprofit whose storefront has a Groundswell mural could report back to Groundswell on the extent to which local residents take care of the mural, how often people comment on it, etc.

PAPS, CPAG, and ArtBridge, an organization that commissions artwork for vinyl construction barrier banners, have described similar ideas for partnerships. ArtBridge hopes to implement a more formal process in which the owners of stores where its banners are installed can document changes like increased business due to public art. PAPS director Tania Duvergne also cites examples of “successful projects” in which public schools, on their own, designed art gallery displays or teaching curricula around their public art pieces, and shared this with PAPS on site visits.

There might be a danger in depending on community partner organization representatives to speak for the whole “community” or to provide reliable, accurate data. But if cooperative partners can be identified and regular reporting scheduled using consistent measurement tools, the burden of reporting on specific neighborhoods is lessened for the public art organization.

“Smart” Technology

Groundswell, ArtBridge, and MAP are all starting to utilize QR codes, smartphone-scannable barcodes that direct public art site visitors to websites with more information about the art. Groundswell experimented this past summer with adding QR codes to a series of posters designed by its Voices Her’d Visionaries program to be hung in public schools to educate teens about healthy relationships. Groundswell can then track how many hits the website gets through the codes. In general, web activity on public art sites is an easy quantitative measure of public interest.

Philadelphia’s Mural Arts Program has a “report damage” section on its website, where anyone who notices a mural in need of repair can alert MAP online. This is also a potential source for quantitative evidence of how many people notice and feel invested in murals.

Use of Interpretive Programming

Public art organizations are increasingly designing interpretive programming around completed artwork, from outdoor guided tours to curated “virtual” artwork displays. The NYC Metropolitan Transportation Authority’s Arts for Transit program provides downloadable podcasts about completed artworks on its website; other organizations post phone numbers at public art sites themselves to call for guided tours (as with many museum exhibits). Both in-person and virtual/phone tours can provide rich opportunities to track usage, collect informal feedback from participants, and solicit feedback via surveys. ArtBridge recently initiated its WALK program, giving tours of its outdoor banner installations. After each tour, ArtBridge emails a link to a brief questionnaire to all tour participants, and offers a prize as an incentive for taking the survey.

A Philadelphia Mural Arts Program guided tour.

Concluding remarks: What next for evaluation?

While systematic, reliable quantitative analysis of public art’s impact at the neighborhood level remains challenging and undervalued in the field, new technologies as well as effective partnerships are making it increasingly feasible for public art organizations to assess factors such as audience engagement, benefits to participants, and community stewardship of completed public art works. The Ixia “Guide to Evaluation” offers a useful roadmap for approaching the evaluation of any type of public art project. At the same time, we should not forget the ability of art to affect people in ways that may seem intangible or even immeasurable, or, as Glenn Weiss puts it, “become part of a memory of a community, part of how a community sees itself.”

(Enjoyed this post? We’re raising funds through this Thursday to make the next generation of Createquity possible. We’re getting close, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


[Createquity Reruns] On Stories vs. Data

(Createquity’s summer rerun programming continues this week with a focus on arts research! Over the next few months, we’re reaching into the archives to pull out some of the best articles and most underrated gems we’ve published since 2007. This post was originally written by me for the Fractured Atlas blog in March 2011, and argues that data and stories are much more closely intertwined than the way we talk about them would suggest. -IDM)

Many of us, especially if we’ve been present at a Rocco Landesman speech in the past year or so, are probably familiar with the quote widely attributed to W. Edwards Deming: “In God we trust; all others must bring data.” And if you’ve filled out a final report for any grant recently, you’ve probably come face to face with philanthropy’s insatiable hunger for numbers. Attendance figures, financial data, surveys—all of these and more are increasingly becoming immovable fixtures of life in the arts, often to the chagrin of fundraisers and arts administrators.

Much of the recent drive toward measurement in the nonprofit sector comes from a new generation of philanthropists, many coming from metrics-obsessed corporate America, who see in numbers the promise of being able to evaluate the effectiveness of their giving with the same facility as they evaluate their investments in the stock market. The leaders of this so-called “smart giving” movement carry a strong distrust of anecdotal evidence (GiveWell is pretty much exhibit A for this), and privilege “hard,” rigorously collected data instead. Conveniently, they also typically focus the bulk of their attention and resources on cause areas such as education, poverty, and global health, where data is in much more ready supply.

Caught in the middle of this trend, artists frequently express discomfort with perceived attempts to translate their work into a statistic. For a field that prides itself on expressing the inexpressible, the notion of reducing a potentially life-changing experience to a number doesn’t just feel confusing, it’s kind of insulting. What’s more, fundraisers who work with individual donors often find that, by contrast, a powerful story can do wonders where facts and figures fall flat. (The same could be said for advocates and politicians.)

It’s easy to see why artists and administrators might prefer stories to data. A story is rich, full of detail and shape. Data is flat. Put another way, data is mined from the common ground between various stories, which means that in order for it to work, for it to be converted into the language of numbers, you have to exclude extraneous information. Even if that “extraneous” information happens to be really interesting and cool and sums up exactly why we do what we do!

Stories work for us as human beings because they are few in number. We can spend two hours watching a documentary, or a week reading a history book, and get a really deep qualitative understanding of what was going on in a specific situation or in a specific case. The problem is that we can only truly comprehend so many stories at once. We don’t have the mental bandwidth to process the experiences of even hundreds, much less thousands or millions of subjects or occurrences. To make sense of those kinds of numbers, we need ways of simplifying and reducing the amount of information we store in each case. So what we do is we take all of those stories and we flatten them: we dry out all of the rich shape and detail that makes up their original form and we package them instead in a kind of mold, collecting a specific and limited set of attributes about each so that we can apply analysis techniques to them in batch. In a very real sense, data = mass-produced stories.
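
Here’s that flattening step as a minimal sketch in Python, with entirely invented story records and field names:

```python
# Two rich, idiosyncratic stories -- invented for illustration.
stories = [
    {"who": "a retired welder", "what": "wept at his first opera",
     "detail": "he drove 90 miles through a snowstorm to attend"},
    {"who": "a seventh-grader", "what": "joined an after-school mural crew",
     "detail": "her attendance at school rose the following semester"},
]

def flatten(story, impact_score):
    """Keep only a fixed, limited schema so cases can be analyzed in batch.
    Everything else -- including the detail that sums up exactly why we do
    what we do -- is discarded."""
    return {"participant": story["who"], "impact_score": impact_score}

# The impact scores here are a coder's judgment call, not a field in the story.
data = [flatten(stories[0], 5), flatten(stories[1], 4)]
print(data)  # data = mass-produced stories
```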

It sounds horrible when I put it like that, right? But it’s an essential process because without it, we can’t be assured that we’re looking at the whole picture. Especially when we’re dealing with a large number of potential cases or examples, if we just concentrate on those that are nearest to us, whether that proximity is measured by geography or social/professional circle or similarity to our own situation, there is a very real risk that we will draw inappropriate conclusions about examples that are a little farther afield. Either random statistical noise (especially in the case of small sample sizes) or a bias that skews the kinds of examples we seek out can contribute to this lack of precision about our conclusions.
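
A quick simulation (hypothetical population, Python standard library only) makes the small-sample-noise half of that problem concrete:

```python
import random

random.seed(0)
# Hypothetical community: exactly 30% of 10,000 people had a given experience.
population = [1] * 3000 + [0] * 7000

for n in (10, 100, 1000):
    estimates = [sum(random.sample(population, n)) / n for _ in range(5)]
    print(f"n={n}: five estimates of the true 30% rate -> {estimates}")
# Tiny samples tend to swing wildly around 0.30, while large ones settle down.
# Selection bias (always sampling the people nearest us) is worse still: unlike
# random noise, it does not shrink no matter how large n grows.
```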

So we gain something very significant when we flatten stories into data. At a minimum, if we’re doing it right, we gain the confidence that comes with looking at the whole picture rather than only a piece of it. At its very best, we gain the opportunity to formulate stories out of data – such as in the case of Steve Sheppard’s work on MASS MoCA and the revitalization of North Adams, MA. But we lose something too. We lose the ability to cross-reference obscure details about one of our examples with obscure details about another, and sometimes those obscure details turn out to be pretty important. We lose some of the context for understanding why data points might look the way they do, and depending on how well we’ve constructed our data, that may or may not change the conclusions we draw.

But make no mistake: stories are never incompatible with data. When you or someone you know has an incredible experience at an arts event, or when a troubled child’s life is saved through involvement with the arts, or when people are brought together who wouldn’t otherwise meet because of the arts, those are all great stories – and they’re also data. One could imagine counting the number of lives saved by the arts, scoring the quality of arts events, cataloguing the new connections and friendships made possible through arts activities. I’m not saying it’s easy to do such things, but that doesn’t mean they can’t be done meaningfully and with integrity. I think we need to challenge ourselves as a field to be more creative about how we articulate and measure the ways in which the arts improve lives. The answers that we’re looking for might be closer within our reach than we thought.

(Enjoyed this post? We’re raising funds through this Thursday to make the next generation of Createquity possible. We’re getting close, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


[Createquity Reruns] Our View of Creative Placemaking, Two Years In

(The two articles reposted earlier this week caused quite a stir when they were published, and it’s fair to say that they helped shape the public conversation around creative placemaking. That stir culminated in a direct response from two officials at the National Endowment for the Arts, Director of the Office of Research and Analysis Sunil Iyengar and Director of Design Jason Schupbach, that was published right here on Createquity in November 2012 and is reprinted below. ArtPlace’s Carol Coletta and Joe Cortright also wrote two responses to “Creative Placemaking Has an Outcomes Problem” and Ann Markusen’s piece. For the most up-to-date thinking on these topics on the part of ArtPlace America (as it’s now known) and the NEA, check out these links respectively. – IDM)

“The Bridge” by artist Elena Colombo, image courtesy of ArtsQuest (MICD25 grantee)

We continue to be grateful for the level of national discourse that has emerged since the National Endowment for the Arts’ introduction of Our Town, the federal government’s signature investment in creative placemaking. In particular, Createquity has published a number of blog posts that have provided us with valuable feedback. They have also raised insightful questions about the program resources and research needs for an initiative of this size and scale.

So much has been happening at the NEA – and some of the most vibrant conversations have been based in part on incomplete or out-of-date information – that we thought it made sense to run through our accomplishments and goals, now that we are in the second year of Our Town grantmaking. (If, after reading this post, you want to know even more about what is happening across the country, please take a look at the current issue of NEA Arts: “Arts and Culture at the Core: A Look at Creative Placemaking.”)


Background

When Rocco Landesman arrived at the NEA in 2009, he put a name on something he saw happening all across this country, from the Little Haiti neighborhood in Miami, Florida, to the cultural district that sprang up around the Museum of Glass in Tacoma, Washington: cities and towns were using the arts to help shape their social, physical, and economic characters. We were rich in anecdotes, but individual communities and organizations lacked the opportunity to connect with others doing similar work.

Like any good producer, Rocco realized that we could not create a community of practice without a name for our shared endeavor, and so the phrase “creative placemaking” was introduced into our national lexicon. Two efforts quickly followed: a white paper by Ann Markusen and Anne Gadwa Nicodemus that defined this sector of work, and a national convening of 40 experts in the arts, community development, and research. This diverse group launched a conversation about how to measure the presence and impact of the arts in U.S. communities.


The grant program

Both of these efforts helped inform the design of Our Town, which makes grants to partnerships among arts and design organizations and local governments to increase community livability through the arts. Because the program frames the conversation around how communities can use the arts to contribute positively to shared priorities, rather than adopting the more traditional approach of simply stating what the arts organizations would like to do and asking for support to do it, Our Town projects have attracted an impressively diverse range of partners. These have included social service agencies, botanic gardens, schools, religious institutions, scientific organizations, local businesses, and business improvement districts.

Through two rounds, we have now invested more than $11.5 million in Our Town grants to 131 communities in all 50 states and the District of Columbia. Along the way, we learned an important lesson: creative placemaking is a big and inclusive tent, and in order to make sense of this emerging sector, we need to look at the specific sub-communities it contains. As grant administrators, we find that it helps to consider Our Town projects in terms of these sub-communities at different points of the award cycle.

From a grant-making point of view, for example, we sort applications into three subsets for review: arts engagement projects, cultural planning and design projects, and projects in non-metro and tribal communities. This is far from a mutually exclusive / completely exhaustive taxonomy, but for our review panels, this division allows Our Town grant applications to be examined in clusters that share similar opportunities, challenges, and access to resources.

Once the grants have been made, and we move into the mode of grants stewardship, it has made more sense for NEA staff to look at the projects based on the specific activities being undertaken. This list is bound to change or grow, but to date, Our Town grants tend to fall into these distinct categories: creating and strengthening artists’ work spaces; asset mapping/cultural district planning; creative industries and entrepreneurship; creating and strengthening cultural facilities; investing in festivals, performances, and other innovative arts programming; reinventing public spaces through creative uses; and the planning and implementation of temporary and permanent public art.

To varying degrees, this taxonomy has subsequently guided our work in evaluating grant projects, conducting national-level research, and creating communities of practice. Let’s take each of these in turn.


Grant evaluation

At the NEA, every grant program must help achieve one of five outcomes: creation, engagement, learning, research, or livability. The Our Town grants are all measured against livability, and grantees report to us through a final descriptive report form specific to this outcome.

Unlike private endowment-driven funders, the NEA receives its budget through annual appropriations from Congress. Despite our name, the NEA does not, in fact, have an endowment, and we are mandated to make our grant decisions anew each year. These facts mean that the NEA cannot commit to funding specific projects over long periods of time, as is the practice with many foundations. (Organizations may, of course, re-apply to the agency.) The ability to make a multi-year commitment to a grantee is the moral prerequisite for doing a multi-year evaluation of that project. So we look at each grantee’s project on its own terms and measure it against its contributions to community livability.

These final descriptive reports allow the NEA to make an evaluation of each grant, but they are also a foundational element in fulfilling our other responsibilities, including both our national research into creative placemaking and our work to build communities of practice.


National research

Following publication of the Creative Placemaking white paper, several organizations and individuals approached us, requesting cost-effective solutions for better understanding and communicating the value their work added to their communities. Almost all of these groups were more than adept at documenting their work with images, video, and anecdote, but they lacked easy access to quantitative information.

We felt that we could play a key role in building an infrastructure to address this need. In order to better articulate the concept of livability that underpins the Our Town program, we posited a hypothesis that almost any successful creative placemaking project would make a difference to its community in at least one of four ways: strengthening the infrastructure that supports artists and arts organizations; increasing community attachment; improving quality of life; and/or driving local economies.

These particular dimensions of livability emerged from a review of extant literature, consultations with the field, and an initial review of grant applications. It also became apparent that these outcomes would be profoundly difficult to measure. So we decided that an appropriate next step would be to develop a framework of arts and livability indicators that would help the field think constructively about how these concepts might be reflected in data already being collected. The indicators are not intended to measure exactly what is happening in creative placemaking projects; they are instead – as the name implies – meant to indicate conditions on the ground that reflect important dimensions of livability and provide insights into relationships that might exist, thus highlighting areas for further research.

By tracking outcomes that are already publicly reported and widely available, we should be able to provide a reasonably reliable indicator of changes to a community’s overall livability. Are all or even some of these changes necessarily due to the presence of creative placemaking activities? Absolutely not, but at least they are the kinds of community-wide outcomes that should matter most to people and groups engaged in creative placemaking. By allowing such outcomes to be tracked easily, the indicators system will bypass the need for elaborate and expensive data collection tools and analytics on a project-by-project basis.

How will we know that the indicators we choose are the right ones? Because they will be based on a series of hypotheses soon to be tested in real communities. For instance, we hypothesize that one indicator of a strengthened arts infrastructure might be an increase in the number of employees at arts organizations. An indicator of community attachment might be the average length that a citizen has lived in a community. An indicator of quality of life might be lower crime rates. And an indicator of a strong local economy might be the number of valid business addresses in a community. Each of these example indicators is based on information already collected and made available by the Census Bureau, the FBI, and the U.S. Postal Service.
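
As a rough illustration of the idea (hypothetical field names and values, not the NEA’s actual indicator definitions), each candidate indicator is just a measurement over data an agency already publishes:

```python
# Hypothetical per-community records drawn from existing public sources
# (Census Bureau, FBI crime statistics, USPS address data).
community_2010 = {"arts_org_employees": 120, "avg_tenure_years": 6.1,
                  "crimes_per_1000": 42.0, "valid_business_addresses": 310}
community_2012 = {"arts_org_employees": 138, "avg_tenure_years": 6.4,
                  "crimes_per_1000": 39.5, "valid_business_addresses": 334}

# Each hypothesized indicator: (field, whether an increase counts as improvement).
INDICATORS = {
    "arts infrastructure": ("arts_org_employees", True),
    "community attachment": ("avg_tenure_years", True),
    "quality of life": ("crimes_per_1000", False),   # lower crime = better
    "local economy": ("valid_business_addresses", True),
}

for dimension, (field, up_is_good) in INDICATORS.items():
    change = community_2012[field] - community_2010[field]
    improved = (change > 0) == up_is_good
    print(f"{dimension}: {field} changed by {change:+.1f} -> "
          f"{'improved' if improved else 'worsened'}")
```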

We need to test each hypothesis in multiple communities because a single indicator may not work the same way in every place. For instance, at the NEA, we have spent a lot of time internally debating whether “length of commute” is a potential indicator for increased quality of life. Not surprisingly, those of us who live in urban centers think shorter commute times equate with a higher quality of life, while those of us who live in the suburbs and have chosen homes specifically farther away from work feel that longer commute times better correlate with a higher quality of life.

We are working with a team of researchers from the Urban Institute to explore these kinds of nuances for every indicator, testing and validating each hypothesis in multiple use cases and documenting the ways in which a single indicator is and is not an effective proxy. We are also working with and learning from other federal agencies that are similarly building indicator systems from nationally available data sets. It is possible to use an indicators system very effectively, indeed, but it is also all too easy to misuse one – and we want to do everything we can to avoid such pitfalls.

Our team will also assess whether the appropriate data can be accessed at the geographical level of detail we require. Recently, Ann Markusen shared a summary from Arizona State University Professor Emily Talen that was circulated on a listserv for urban planning researchers. This sort of granular investigation into the data available from, in this case, the American Community Survey is exactly the next order of business for our indicators team. So we are, yet again, indebted to Ann.

If we are successful in creating this indicators framework, then the nation’s arts organizations will have free and easy access to a system that helps them begin to visualize and report on some of the things happening alongside their creative placemaking projects. From a social science perspective, will these metrics prove a causal relationship? Again, absolutely not. But for citizens, funders, civic officials, and business leaders, they will provide a good indication of what is happening. And when viewed alongside qualitative data from the projects themselves, the indicators may provide sufficient evidence to satisfy stakeholders who seek assurance of the projects’ overall value. Others may wish to know more, and if so, the indicators and the qualitative data lay the foundation for further research and project-specific evaluation.

We believe this approach will help demystify data for organizations involved in creative placemaking. An organization might be brilliant at developing an outdoor festival that would literally bring art into the center of the public square. It might also excel at documenting the resulting changes it can observe in the surrounding neighborhood. But it may not be skilled at identifying and analyzing data sets, and it may not have the time or the funds to undertake an expensive and exhaustive research project. These organizations are exactly the target audience for our framework, since we will publish – in plain language – the data sets that pass our national validation tests and explain how to extricate only the data that are relevant to, in this case, innovative arts programming.

Photograph by Robert Allen, copyright Trey McIntyre Project (Our Town grantee).


Communities of practice

Taken together, these quantitative and qualitative impacts will allow the NEA to help connect and support communities of practice in creative placemaking.

We have issued an RFP for help in producing documentation that looks at each of the Our Town grantees and asks: what did you set out to do with your project; how did you go about doing your project; how do you know whether you succeeded; and what would you do differently having been through what you went through?

The field has been clamoring for “how-to” information. By combining the final descriptive reports from the Our Town grants, the indicators framework, and the in-depth documentation of each project, we will be able to play matchmaker.

A community that wants examples of successful, federally funded projects can comb through our analysis of the NEA’s final descriptive reports to learn which other communities have succeeded.

A community that would like to make a major investment in public art will be able to parse the in-depth descriptions of public art projects to see what lessons they can learn.

Even prior to all of those resources being available, we have started trying to create cross-community connections by having Our Town panelists share their insights and experiences in a series of archived webinars. We will do even more of these in the coming months, featuring grantees.


Moving forward

We are really only two years into this work, and are proud of all that we have been able to accomplish. But we are also humbled by the work ahead. The good news is that there continues to be national energy and excitement around creative placemaking, and we are eager for any or all feedback.

We hope that there will continue to be a robust conversation in blogs, on listservs, and throughout the Twittersphere. And we also hope that people will continue to feel free to interact directly with the agency. We are always eager to hear from you at schupbachj@arts.gov or iyengars@arts.gov.

(Enjoyed this post? We’re raising funds through July 10 to make the next generation of Createquity possible. We are 56% of the way there, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


Diane Ragsdale’s Wonderful Words and Nina Simon’s Participatory Museum

We’re feeling the love from two new arts luminaries today: Diane Ragsdale, provocateur and fellow blogger at Jumper, and Nina Simon, Executive Director at the Santa Cruz Museum of Art and History, have thrown their support behind our Indiegogo campaign. Diane offers some inspiring words of support in the video below, and Nina has given us a new campaign perk to offer: signed copies of her thought-provoking book, The Participatory Museum. We only have two copies to give away, and Nina doesn’t sign books very often, so we’re asking for $100 donations for these. Get ‘em while you can!

We’re excited to have passed the $5,000 mark in the campaign, putting us more than 50% of the way to our goal. It’s been utterly humbling to see how many of you think this project deserves your hard-earned dollars. To everyone who has donated or shared the campaign with others, we can’t thank you enough. For those of you just tuning in or still on the fence, we hope Diane and Nina can help convince you. If you value Createquity as a resource and feel that the site is worthy of a financial contribution, there’s never going to be a better time to donate.

If you’re reading this via email, you can check out Diane’s video here.


[Createquity Reruns] Fuzzy Concepts, Proxy Data: Why Indicators Won’t Track Creative Placemaking Success

(Creative Placemaking Week at Createquity continues with a look back at Ann Markusen’s massive essay raising pointed questions about the research approach adopted by the National Endowment for the Arts and ArtPlace to understand creative placemaking back in 2012. As professor and director of the Project on Regional and Industrial Economics at the University of Minnesota Humphrey School of Public Affairs, Ann is one of the most respected and senior voices in the arts research community, and her critique was particularly notable at the time given that she was co-author of the original white paper commissioned by the NEA to introduce the concept of creative placemaking to the world in the first place. The comment section includes a response from me, among others. -IDM)

“There is nothing worse than a sharp image of a fuzzy concept.” -Ansel Adams
Photo by beast love

Creative placemaking is electrifying communities large and small around the country. Mayors, public agencies and arts organizations are finding each other and committing to new initiatives. That’s a wonderful thing, whether or not their proposals are funded by national initiatives such as the National Endowment for the Arts’ Our Town program or ArtPlace.

It’s important to learn from and improve our practices on this new and so promising terrain. But efforts based on fuzzy concepts and indicators designed to rely on data external to the funded projects are bound to disappoint. Our evaluative systems must nurture rather than discourage the marvelous movement of arts organizations, artists and arts funders out of their bunkers and into our neighborhoods as leaders, animators, and above all, exhibitors of the value of arts and culture.

In our 2010 Creative Placemaking white paper for the NEA, Anne Gadwa Nicodemus and I characterize creative placemaking as a process where “partners… shape the physical and social character of a neighborhood, town, city, or region around arts and cultural activities.” A prominent ambition, we wrote, is to “bring diverse people together to celebrate, inspire, and be inspired.” Creative placemaking also “animates public and private spaces, rejuvenates structures and streetscapes, (and) improves local business viability and public safety,” but arts and culture are at its core. This definition suggests a number of distinctive arenas of experimentation, where the gifts of the arts are devoted to community liveliness and collaborative problem-solving and where new people participate in the arts and share their cultures.

And, indeed, Our Town and ArtPlace encourage precisely this experimental ferment. Like the case studies in Creative Placemaking, each funded project is unique in its artistic disciplines, scale, problems addressed and aspirations for its particular place. Thus, a good evaluation system will monitor the progress of each project team towards its stated goals, including revisions made along the way. NEA’s Our Town asks grant-seekers to describe how they intend to evaluate their work, and ArtPlace requires a monthly blog entry. But rather than more formally evaluate each project’s progress over time, both funders have developed and are compiling place-specific measures based on external data sources that they will use to gauge success: the Arts and Livability Indicators in the case of the NEA, and what ArtPlace is calling its Vibrancy Indicators.

Creative placemaking funders are optimistic about these efforts and their usefulness. “Over the next year or two,” wrote Jason Schupbach, NEA’s Director of Design, last May, “we will build out this system and publish it through a website so that anyone who wants to track a project’s progress in these areas (improved local community of artists and arts organizations, increased community attachment, improved quality of life, invigorated local economies) will be able to do so, whether it is NEA-funded or not. They can simply enter the time and geography parameters relevant to their project and see for themselves.”

Over the past two years, I have been consulting with creative placemaking leaders and given talks to audiences in many cities and towns across the country and abroad. Increasingly, I am hearing distress on the part of creative placemaking practitioners about the indicator initiatives of the National Endowment for the Arts and ArtPlace. At the annual meetings of the National Alliance for Media Arts and Culture last month, my fellow Creative Placemaking panel members, all involved in one or more ArtPlace- or Our-Town-funded projects, expressed considerable anxiety and confusion about these indicators and how they are being constructed. In particular, many current grantee teams with whom I’ve spoken are baffled by the one-measure-fits-all nature of the indicators, especially in the absence of formal and case-tailored evaluation.

I’ll confess I’m an evidence gal. I fervently believe in numbers where they are a good measure of outcomes; in secondary data like Census and the National Center for Charitable Statistics where they are up to the task; in surveys where no such data exist; in case studies to illuminate the context, process, and the impacts people tangibly experience; in interviews to find out how actors make decisions and view their own performance. My own work over the past decade is riddled with examples of these practices, including appendices intended to make the methodology and data used as transparent as possible.

So I embrace the project of evaluation, but am skeptical of relying on indicators for this purpose. In pursuing a more effective course, we can learn a lot from private sector venture capital practices, the ways that foundations conduct grantee evaluations, and, for political pitfalls, defense conversion placemaking experiments of the 1990s.


Learning from Venture Capital and Philanthropy

How do private sector venture capital (VC) firms evaluate the enterprises they invest in? Although they target rates of return in the longer run, they do not resort to indicators based on secondary data to evaluate progress. They closely monitor their investees—small firms that often have little business experience, just as many creative placemaking teams are new to their terrain. VC firms play an active role in guiding youthful companies, giving them feedback germane to their product or service goals. They help managers evaluate their progress and bring in special expertise where needed.

Venture capital firms are patient, understanding realistic timelines. The rule of thumb is that they commit to five to seven years, though it may be less or more. Among our Creative Placemaking cases, few efforts succeeded in five years, while some took ten to fifteen years.

VC firms know that some efforts will fail. They are attentive to learning from such failures and sharing what they learn in generic form with the larger business community. Both ArtPlace and the NEA have stated their desire to learn from success and failure. Yet generic indicators, their chosen evaluation tools, are neither patient nor tailored to specific project ambitions. Current Our Town and ArtPlace grant recipients worry that the 1-2 years of funding they’re getting won’t be enough to carry projects through to success or establish enough local momentum to be self-sustaining. Neither ArtPlace nor Our Town has a realistic exit strategy in place for its investments, other than “the grant period’s over, good luck!”

Hands-on guidance is not foreign to nonprofit philanthropies funding the arts. Many arts program officers act as informal consultants and mentors to young struggling arts organizations and to mature ones facing new challenges. My study with Amanda Johnson of Artists’ Centers shows how Minnesota funders have played such roles for decades. They ask established arts executive directors to mentor new start-ups, a process that the latter praised highly as crucial to their success. The Irvine and Hewlett Foundations are currently funding California nonprofit intermediaries to help small, folk and ethnic organizations use grant monies wisely. They also pay for intermediaries across sectors (arts and culture, health, community development and so on) to meet together to learn what works best.

The NEA has hosted three webinars at which Our Town panelists talk about what they see as effective projects/proposals, a step in this direction. But these discussions are far from a systematic gathering and collating of experience from all grantees in ways that would help the cohorts learn and contact those with similar challenges.


The Indicator Impetus

Why are the major funders of creative placemaking staking so much on indicators rather than evaluating projects on their own aspirations and steps forward? Pressure from the Office of Management and Budget, the federal bean-counters, is one factor. In January 2011, President Obama signed into law the GPRA Modernization Act of 2010, updating the original Government Performance and Results Act (GPRA) of 1993, and a new August 2012 Circular A-11 heavily emphasizes the use of performance indicators for all agencies and their programs.

As a veteran of research and policy work on scientific and engineering occupations and on industrial sectors like steel and the military industrial complex, I fear that others will perceive indicator mania as a sign of field weakness. To Ian David Moss’s provocative title “Creative Placemaking has an Outcomes Problem,” I’d reply that we’re in good company. Huge agencies of the federal government, like the National Science Foundation, the National Institutes of Health and NASA, fund experiments and exploratory development without asking that results be held up to some set of external indicators not closely related to their missions. They accept slow progress and even failure, as in cancer research or nuclear fusion, because the end goal is worthy and because we learn from failure. Evaluation by external generic indicators fails to acknowledge the experimental and ground-breaking nature of these creative-placemaking initiatives and misses an opportunity to bolster understanding of how arts and cultural missions create public value.


Why Indicators Will Disappoint I: Definitional Challenges

Many of the indicators charted in ArtPlace, NEA Our Town, and other exercises (e.g. WESTAF’s Creative Vitality Index) bear a tenuous relationship to the complex fabric of communities or specific creative placemaking initiatives. Terms like “vitality,” “vibrancy,” and “livability” are great examples of fuzzy concepts, a notion that I used a decade ago to critique planners’ and geographers’ infatuation with concepts like “world cities” and “flexible specialization.” A fuzzy concept is one that means different things to different people, but flourishes precisely because of its imprecision. It leaves one open to trenchant critiques, as in Thomas Frank’s recent pillorying of the notion of vibrancy.

Take livability, for instance, prominent in the NEA’s indicators project. One person’s quality of life can be inimical to another’s. Take the young live music scene in cities: youth magnet, older-resident nightmare. Probably no concept as worthy as quality of life has been the subject of so many disappointing and conflicting measurement exercises.

Just what does vibrancy mean? Let’s try to unpack the term. ArtPlace’s definition: “we define vibrancy as places with an unusual scale and intensity of specific kinds of human interaction.” Pretty vague and… vibrancy are places? Unusual scale? Scale meaning extensive, intensive? Of specific kinds? What kinds? This definition is followed by: “While we are not able to measure vibrancy directly, we believe that the measures we are assembling, taken together, will provide useful insights into the nature and location of especially vibrant places within communities.” If I were running a college or community discussion session on this, I would put the terms “vibrancy, places, communities, measures,” and so on up on the board (so to speak), and we would undoubtedly have a spirited and inconclusive debate!

And what is the purpose of measuring vibrancy? Again from the same ArtPlace LOI: “…the purpose of our vibrancy metrics is not to pronounce some projects ‘successes’ and other projects ‘failures’ but rather to learn more about the characteristics of the projects and community context in which they take place which leads to or at least seems associated with improved places.” Even though the above description mentions “characteristics of the projects,” it’s notable that their published vibrancy indicators only measure features of place.

In fact, many of the ArtPlace and NEA indicators are roughly designed and sometimes in conflict. While giving the nod to “thriving in place,” ArtPlace emphasizes the desirability of visitors in its vibrancy definition (meaning outsiders to the community); by contrast, the NEA prioritizes social cohesion and community attachment, attributes scarce in the ArtPlace definitions. For instance, ArtPlace proposes to use employment ratio—“the number of employed residents living in a particular geography (Census Block) and dividing that number by the working age persons living on that same block” as a measure of people-vibrancy. The rationale: “vibrant neighborhoods have a high fraction of their residents of working age who are employed.” Think of the large areas of new non-mixed-use upscale high-rise condos where the mostly young professional people living there commute daily to jobs and nightly to bars and cafes outside the neighborhood. Not vibrant at all. But such areas would rank high using this measure.
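
To see the objection in miniature, here is a minimal sketch of that employment-ratio calculation, using hypothetical block-level counts:

```python
# Hypothetical Census-block counts -- invented for illustration.
blocks = {
    # An upscale residential tower: nearly everyone of working age is employed,
    # but they work, eat, and socialize somewhere else.
    "new_condo_block": {"employed_residents": 950, "working_age_residents": 1000},
    # A mixed-use block full of daytime and evening street life.
    "mixed_use_block": {"employed_residents": 520, "working_age_residents": 800},
}

for name, b in blocks.items():
    ratio = b["employed_residents"] / b["working_age_residents"]
    print(f"{name}: employment ratio = {ratio:.2f}")
# The condo block "wins" (0.95 vs. 0.65) despite being, on this account,
# the less vibrant place.
```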

ArtPlace links vibrancy with diversity, defined as heterogeneity of people by income, race and ethnicity. They propose “the racial and ethnic diversity index” (composition not made explicit) and “the mixed-income, middle income index” (ditto) to capture diversity. But what about age diversity? Shouldn’t we want intergenerational activity and encounters too? It is also problematic to prioritize the dilution of ethnicity in large enclaves of recent immigrant groups. Would a thriving heavily Vietnamese city or suburb be considered non-vibrant because its residents choose to live and build their cultural institutions there, facing discrimination in other housing markets? Would an ethnic neighborhood experiencing white hipster incursions be evaluated positively despite decline in its minority populations that result from lower income people being forced out?

Many of the NEA’s indicators are similarly fuzzy. As an indicator of impact on art communities and artists, its August 2012 RFP proposes median earnings for residents employed in entertainment-related industries (arts, design, entertainment, sports, and media occupations). But a very large number of people in these occupations are in sports and media fields, not the arts. The measure does not include artists who live outside the area but work there. And many artists self-report their industry as other than the one listed above, e.g. musicians work in the restaurant sector, and graphic artists work in motion pictures, publishing and so on. ArtPlace is proposing to use very similar indicators—creative industry jobs and workers in creative occupations—as measures of vibrancy.

It is troubling that neither indicator-building effort has so far demonstrated a willingness to digest and share publicly the rich, accessible, and cautionary published research that tackles many of these definitions. See for instance “Defining the Creative Economy: Industry and Occupational Approaches,” the joint effort by researchers Doug DeNatale and Greg Wassall from the New England Creative Economy Project, Randy Cohen of Americans for the Arts, and me at the Arts Economy Initiative to unpack the definitional and data challenges for measuring arts-related jobs and industries in Economic Development Quarterly.

Hopefully, we can have an engaging debate about these notions before indices are cranked out and disseminated. Heartening signs: in its August RFP, the NEA backtracks from its original plan, unveiled in a spring 2012 webinar, to contract for wholesale construction of a given set of indicators to be distributed to grantees. Instead, it is now contracting for the testing of indicator suitability by conducting twenty case studies. And just last week, the NEA issued a new RFP for developing a virtual storybook to document community outcomes, lessons learned and experiences associated with their creative placemaking projects.


Why Indicators Will Disappoint II: Dearth of Good Data

If definitional problems aren’t troubling enough, think about the sheer inadequacy of data sources available for creating place-specific indicators.

For more than a half-century, planning and economic development scholars have been studying places and policy interventions to judge success or failure. Yet when Anne Gadwa Nicodemus went in search of research results on decades of public housing interventions, assuming she could build on these for her evaluation of Artspace Projects’ artist live/work and studio buildings, she found that they don’t really exist.

Here are five serious operational problems confronting creative placemaking indicator construction. First, the dimensions to be measured are hard to pin down. Some of the variables proposed are quite problematic—they don’t capture universal values for all people in the community.

Take ArtPlace’s cell phone activity indicator, for instance, which will be used on nights and weekends to map where people congregate. Are places with cell activity to be judged as more successful at creative placemaking? Cell phone usage is heavily correlated with age, income and ethnicity. The older you are, the less likely you are to have a cell phone or use it much, and the more likely to rely on land-lines, which many young people do without. At the November 2012 Association of Collegiate Schools of Planning annual meetings, Brettany Shannon of the University of Southern California presented research results from a survey of 460 LA bus riders showing low cell phone usage rates among the elderly, particularly Latinos. Among those aged 18-30, only 9% of English speakers and 15% of Spanish speakers had no cell phone, compared with 29% of English speakers and 54% of Spanish speakers over age 50. A cell phone activity measure is also likely to completely miss people attending jazz or classical music concerts, dramas, and religious cultural events where cell phones are turned off. And what about all those older folks who prefer to sit in coffee shops and talk to each other during the day, play leadership roles in the community through face-to-face work, or meet and engage in arts and cultural activities around religious venues? Aren’t they congregating, too?

Or take home ownership and home values, an indicator the NEA hopes to use. Hmmm… home ownership rates—and values—in the US have been falling, in large part due to overselling of homes during the housing bubble. Renting is just as respectable an option for place lovers, especially young people, retirees, and lower-income people in general. Why would we want grantees to aspire to raise homeownership rates in their neighborhoods, especially given gentrification concerns? Home ownership does not insulate you against displacement, because as property values rise, property taxes do as well, driving out renters and homeowners alike on fixed or lower incomes. ArtPlace is developing “measures of value, which capture changes in rental and ownership values…” This reads like an invitation to gentrification, and runs contrary to the NEA’s aspirations for creative placemaking to support social cohesion and community attachment.

Second, most good secondary data series are not available at spatial scales corresponding to grantees’ target places. ArtPlace’s vibrancy exercise aspires to compare neighborhoods with other neighborhoods, but available data make this task almost impossible to accomplish at highly localized scales. Some data points, like arts employment by industry, are available only down to the county level and only for more heavily populated counties because of suppression problems (and because they are lumped together with sports and media in some data sets). Good data on artists from the Census (Public Use Microdata Sample) and American Community Surveys, the only databases that include the self-employed and unemployed, can’t be broken down below Public Use Microdata Areas (PUMAs) of 100,000 people, which bear little relationship to real neighborhoods or city districts (see Crossover, where we mapped artists using 2000 PUMS data for the Los Angeles and Bay Area metros).

Plus, many creative placemaking efforts have ambitions to have an impact at multiple scales. Gadwa Nicodemus’s pioneering research studies, How Artist Space Matters and How Art Spaces Matter II, looked in hindsight at Artspace’s artist live/work and mixed use projects, where the criteria for success varied widely between projects and for the various stakeholders involved in each. Artists, nonprofit arts organizations, and commercial enterprises (e.g. cafes) in the buildings variously hoped that the projects would have an impact on the regional arts community, neighborhood commercial activity and crime rates, and local property values. The research methods included surveys and interviews exploring whether the goals of the projects had been achieved in the experience of target users. Other methods involved complex secondary data manipulation to come up with indicators that are a good fit. Gadwa Nicodemus’s studies demonstrate how much work it is to document real impact along several dimensions, at multiple spatial scales, and over long enough time periods to ensure a decent test. Her indicators, such as hedonic price indices to gauge area property value change, are sophisticated, but also very time- and skill-intensive to construct.

Third, even if you find data that address what you hope to achieve, they are unlikely be statistically significant at the scales you hope for. In our work with PUMS data from the 2000 Census, a very reliable 5% sample, we found we could not make reliable estimates of artist populations at anything near a neighborhood scale. To map the location of artists in Minneapolis, we had to carve the city into three segments based on PUMA lines, and even then, we were pushing the statistical reliability hard (Artists’ Centers, Figure 3, p. 108).

Some researchers are beginning to use the American Community Survey, a 1% sample much smaller than the decennial Census PUMS 5%, to build local indicators, heedless of this statistical reliability challenge. ArtPlace, for instance, is proposing to use ACS data to capture workers in creative occupations at the Census Tract level. See the statistical appendix to Leveraging Investments in Creativity (LINC)’s Creative Communities Artist Data User Guide for a detailed explanation of this problem. Adding the ACS up over five years, one way of improving reliability, is problematic if you are trying to show change over a short period of time, which the creative placemaking indicators presumably aspire to do.
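
The arithmetic behind this reliability worry is simple. Under a rough simple-random-sampling approximation (actual PUMS and ACS designs are more complex, and these numbers are illustrative), the margin of error on a small-area estimate can swamp the estimate itself:

```python
import math

def share_standard_error(p, n):
    """Approximate standard error of an estimated proportion p from n responses,
    assuming simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

tract_population = 4000   # roughly the size of a Census tract
artist_share = 0.02       # suppose 2% of residents are artists

for label, sampling_rate in [("decennial PUMS (5%)", 0.05),
                             ("1-year ACS (~1%)", 0.01)]:
    n = int(tract_population * sampling_rate)
    se = share_standard_error(artist_share, n)
    print(f"{label}: n={n}, estimate = 2.0% +/- {1.96 * se:.1%} (95% CI)")
# With ~40 responses, the confidence interval is wider than the estimate itself.
```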

Fourth, charting change over time successfully is a huge challenge. ArtPlace intends to “assess the level of vibrancy of different areas within communities, and importantly, to measure changes in vibrancy over time in the communities where ArtPlace invests.” How can we expect projects that hope to change the culture, participation, physical environment and local economy to show anything in a period of one, two, three years? More ephemeral interventions may only have hard-to-measure impacts in the year that they happen, even if they catalyze spinoff activities, while the potentially clearer impact of brick-and-mortar projects may take years to materialize.

We know from our case studies and from decades of urban planning and design experience that changes in place take long periods of time. For example, Cleveland’s Gordon Square Arts District, a case study in Creative Placemaking, required at least five years for vision and conversations to translate into a feasibility study, another few years to build the streetscape and renovate the two existing shuttered theatres, and more to build the new one.

Because it’s unlikely that the data will be good enough to chart creative placemaking projects’ progress over time, we are likely to see indicators used in a very different and pernicious way – to compare places with each other in the current time period. But every creative placemaking initiative is very, very different from others, and their current rankings on these measures are more apt to reflect long-time neighborhood evolution and particularities than the impact of their current activities. I can just see creative placemakers viewing such comparisons and throwing their hands up in the air, shouting, “but… but… but, our circumstances are not comparable!”

One final indicator challenge. As far as I can tell, there are very few arts and cultural indicators included among the measures under consideration. Where is the mission of bringing diverse people together to celebrate, inspire, and be inspired? Shouldn’t creative placemaking advance the intrinsic values and impact of the arts? Heightened and broadened arts participation? Preserving cultural traditions? Better quality art offerings? Providing beauty, expression, and critical perspectives on our society? Are artists and arts organizations whose greatest talents lie in the arts world to be judged only on their impact outside of this core? Though arts participation is measurable, many of these “intrinsic” outcomes are challenging data-wise, just as are many of the “instrumental” outcomes given central place in current indicator efforts. WolfBrown now offers a website that aims to “change the conversation about the benefits of arts participation, disseminate up-to-date information on emerging practices in impact assessment, and encourage cultural organizations to embrace impact assessment as standard operating practice.”


The Political Dangers of Relying on Indicators

I fear three kinds of negative political responses to reliance on poorly-defined and operationalized indicators. First, it could be off-putting to grantees and would-be grantees, including mayors, arts organizations, community development organizations and the many other partners to these projects. It could be baffling, even angering, to be served up a book of cooked indicators with very little fit to one’s project and aspirations and to be asked to make sense out of them. The NEA’s recent RFP calls for the development of a user guide with some examples, which will help. Those who have expressed concern report hearing back something like “don’t worry about it – we’re not going to hold you to any particular performance on these. They are just informational for you.” Well, but then why invest in these indicators if they aren’t going to be used for evaluation after all?!

Second, creative placemaking grants create competitors, and that means they are generating losers as well as winners. Some who aren’t funded the first time try again, and some are sanguine and grateful that they were prompted to make the effort and form a team. But some will give up. There are interesting parallels with place-based innovations in the 1990s. The Clinton administration’s post-Cold War defense conversion initiatives included the Technology Reinvestment Project, in which regional consortia competed for funds to take local military technologies into the civilian realm. As Michael Oden, Greg Bischak and Chris Evans-Klock concluded in our 1995 Rutgers study (full report available from the authors on request), the TRP failed after just a few years because Members of Congress heard from too many disgruntled constituents. In contrast, the Manufacturing Extension Partnership, begun in the same period and administered by NIST, has survived because after its first exploratory rounds, it partnered with state governments to amplify funding for technical assistance to defense contractors struggling with defense budget implosion everywhere. States, rather than projects, then competed, eager for the federal funds.

Third, and most troubling, funders may begin favoring grants to places that already look good on the indicators. Anne Gadwa Nicodemus raised this in her GIA Reader article on creative placemaking last spring. ArtPlace’s own funding criteria suggest this: “ArtPlace will favor investments… and sees its role as providing venture funding in the form of grants, seeding entrepreneurial projects that lead through the arts and already enjoy strong local buy-in and will occur at places already showing signs of momentum….” Imagine how a proposal to convert an old school in a very low income and somewhat depopulated, minority neighborhood into an artist live/work, studio and performance and learning space would stack up against a proposal to add funding to a new outreach initiative in an area already colonized by young people from elsewhere in the same city. A funder might be tempted to fund the latter, where vibrancy is already indicated, over the other, where the payoff might be much greater but farther down the road.


In an Ideal World, Sophisticated Models

In any particular place, changes in the proposed indicators will not be attributable to the creative placemaking intervention alone. So imagine the distress of a fundee whose indicators are moving the wrong way and place it poorly in comparison to others. Area property values may be falling because an environmentally obnoxious plant starts up. Other projects might look great on indicators not because of their initiatives, but because another intervention, like a new light rail system or a new community-based school, dramatically changes the neighborhood.

What we would love to have, but don’t at this point, are sophisticated causal models of creative placemaking. The models would identify the multiple actors in the target place and take into account the results of their separate actions. A funded creative placemaking project team would be just one such “actor” among several (e.g. real estate developers, private sector employers, resident associations, community development nonprofits and so on).

A good model would account for other non-arts forces at work that will interact with the various actors’ initiatives and choices. This is crucial, and the logic models proposed by Moss, Zabel and others don’t do it. Scholars of urban planning well know how tricky it is to isolate the impact of a particular intervention when there are so many others occurring simultaneously (crime prevention, community development, social services, infrastructure investments like light rail or street repaving).

Furthermore, models should be longitudinal, i.e. they will chart progress in the particular place over time, rather than comparing one place cross-sectionally with others that are quite unlikely to share the same actors, features and circumstances. If we create models that are causal, acknowledge other forces at work, and are applied over time, “we’ll be able to clearly document the critical power of arts and culture in healthy community development,” reflects Deborah Cullinan of San Francisco’s Intersection for the Arts in a followup to our NAMAC panel.

Such multivariate models, as social scientists and urban planners call them, lend themselves to careful tests of hypotheses about change. We can ask if a particular action, like the siting of an interstate highway interchange or adding a prison or being funded in a federal program like the Appalachian Regional Commission, produces more employment or higher incomes or better quality of life for its host city or neighborhood when compared with twin or comparable places, as Andrew Isserman and colleagues have done in their “quasi-experimental” work (write me for a summary of these, soon to be published).
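
A minimal sketch of that quasi-experimental logic (comparing a funded place against a “twin,” with invented numbers) is a difference-in-differences calculation:

```python
# Hypothetical arts-participation rates (%) before and after a creative
# placemaking intervention, in the funded place and a comparable "twin."
funded = {"before": 21.0, "after": 26.5}
twin   = {"before": 20.5, "after": 22.0}   # similar place, no intervention

# The twin absorbs region-wide trends (economy, demographics, etc.);
# what remains is attributable, cautiously, to the intervention.
did = (funded["after"] - funded["before"]) - (twin["after"] - twin["before"])
print(f"difference-in-differences estimate: {did:+.1f} percentage points")
# -> +4.0: the funded place improved 4 points more than its twin did.
```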

We can also run tests to see if differentials in city and regional arts participation rates and presence of arts organizations can be explained by differences in funding, demographics, or features of local economies. My teammates and I used Cultural Data Project and National Center for Charitable Statistics data on nonprofit arts organizations in California to do this for all California cities with more than 20,000 residents. Our results, while cross-sectional, suggest that concerted arts and culture-building by local Californians over time leads to higher arts participation rates and more arts offerings than can be explained by other factors. The point is that techniques like these DO take into account other forces (positive and negative) operating in the place where creative placemaking unfolds.
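
As a rough sketch of such a multivariate test (synthetic data and invented variable names, not the actual California study’s specification), a regression that controls for demographic and economic factors is what lets the analysis account for those other forces:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_cities = 150  # e.g., California cities with more than 20,000 residents

df = pd.DataFrame({
    "arts_funding_per_capita": rng.gamma(2.0, 5.0, n_cities),
    "median_income_k": rng.normal(60, 15, n_cities),
    "pct_college_educated": rng.uniform(10, 60, n_cities),
})
# Synthetic outcome: participation depends on funding plus other factors plus noise.
df["participation_rate"] = (
    10 + 0.8 * df["arts_funding_per_capita"]
    + 0.05 * df["median_income_k"] + 0.1 * df["pct_college_educated"]
    + rng.normal(0, 3, n_cities)
)

# Regressing participation on funding *while controlling for* demographics is
# what allows the model to take other forces in each place into account.
model = smf.ols("participation_rate ~ arts_funding_per_capita + "
                "median_income_k + pct_college_educated", data=df).fit()
print(model.params)
```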


Charting a Better Path

It’s understandable why the NEA and ArtPlace are turning to indicators. Their budgets for creative placemaking are relatively small, and they’d prefer to spend them on more programming and more places rather than on expensive, careful evaluations. Nevertheless, designing indicators unrelated to specific funded projects seems a poor way forward. Here are some alternatives.

Commit to real evaluation. This need not be as expensive as it seems. Imagine if the NEA and ArtPlace, instead of contracting to produce one-size-fits-all indicators, were to design a three-stage evaluation process. Grantees propose staged criteria for success and reflect on them at specified junctures. Funding is awarded on the basis of the appropriateness of this evaluative process and continued on receipt of reflections. Funders use these to give feedback to the grantee and retool their expectations if necessary, and to summarize and redesign overall creative placemaking achievements. This is more or less what many philanthropic foundations do currently and have for many years, the NEA included. Better learning is apt to emerge from this process than from a set of indicator tables and graphics. ArtPlace is well-positioned to draw on the expertise of its member foundations in this regard.

Build cooperation among grantees to soften the edge of competition for funds. Convene grantees and would-be grantees annually to talk about success, failures, and problems. Ask successful grantees to share their experience and expertise with others who wish to try similar projects elsewhere. During Leveraging Investments in Creativity’s ten-year lifespan, it convened its creative community leaders annually and sometimes more often, resulting in tremendous cross-fertilization that boosted success. Often, what was working elsewhere turned out to be a better mission or process than what a local group had planned. Again, ArtPlace in particular could create a forum for this kind of cooperative learning. And, as mentioned, NEA’s webinars are a step in the right direction. Imagine, notes my NAMAC co-panelist Deborah Cullinan of Intersection for the Arts, if creative placemaking funders invested in cohort learning over time, with enough longevity to build relationships, share lessons, and nurture collaborations.

Finally, the National Endowment for the Arts and ArtPlace could provide technical assistance to creative placemaking grantees, as the Manufacturing Extension Partnership does for small manufacturers. Anne Gadwa Nicodemus and I continually receive phone calls from people across the country who are psyched to start projects but in need of information and skills on multiple fronts. There are leaders in other communities, and consultants, too, who know how creative placemaking works under diverse circumstances and who can form a loose consortium of talent: people who understand the political framework, the financial challenges, and the way to build partnerships. Artspace Projects, for instance, has recently converted over a quarter century of experience with more than two dozen completed artist and arts-serving projects into a consultancy to help people in more places craft arts-based placemaking projects.

Wouldn’t it be wonderful if, in a few years’ time, we could say, look! Here is the body of learning and insights we’ve compiled about creative placemaking–how to do it well, where the diverse impacts are, and how they can be documented. With indicators dominating the evaluation process at present, we are unlikely to learn what we could from these young experiments. An indicators-preoccupied evaluation process is likely to leave us disappointed, with spreadsheets and charts made quickly obsolete by changing definitions and data collection procedures. Let’s think through outcomes in a more grounded, holistic way. Let’s continue, and broaden, the conversation!

(The author would like to thank Anne Gadwa Nicodemus, Deborah Cullinan, Ian David Moss, and Jackie Hasa for thorough reads and responses to earlier drafts of this article.)

(Enjoyed this post? We’re raising funds through July 10 to make the next generation of Createquity possible. We are more than halfway there thanks to the generous support of our readers, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


[Createquity Reruns] Creative Placemaking Has an Outcomes Problem

(Welcome to Createquity’s summer rerun programming! Over the next few months, we’re reaching into the archives to pull out some of the best articles and most underrated gems we’ve published since 2007. This week, we’re focusing on creative placemaking! The article below was the opening shot in a debate about the emerging practice of using art as a mechanism for place-based change that occupied the pages of Createquity for the better part of a year in 2012-13. Among other things, it was Createquity’s most-read post from shortly after it was published until earlier this year, and spurred a comment section that is well worth reading if you haven’t seen it yet. -IDM)

Art Cars Attack, photo by M Glasgow

(Note: a follow-up to this post, “In Defense of Logic Models,” is now available here)

“I feel like whenever I talk to artists these days, I should be apologizing,” says Kevin Stolarick, Research Director for the Martin Prosperity Institute at the University of Toronto’s Rotman School of Management. To most in the arts community, Stolarick is better known as Richard Florida’s longtime right-hand man and research collaborator on his bestselling book, The Rise of the Creative Class. Stolarick, who first met Florida just after the academic had cashed the first check for the advance from Basic Books, proceeds to recount how the book’s success led to an explosion of interest from mayors all around the country wanting to redefine their cities as welcoming meccas for Florida’s new Starbucks-drinking, jeans-wearing idea people. Unfortunately, the mayors’ collective interpretation of the lessons from Florida’s book boiled down to, “all we need is to get us some gays and artists and a bike path or two, and our problems will be solved!” “The problem,” Stolarick tells us, a decade after The Rise of the Creative Class’s publication, “is that it’s a trap.”

This scene is unfolding in a basement auditorium in lower Manhattan, the site of a panel and presentation hosted by the Municipal Art Society of New York to give audiences the first public preview of the ArtPlace vibrancy indicators. ArtPlace, as many readers know, is a private-sector partnership among nearly a dozen leading foundations to support “creative placemaking,” a term invented by officials at the National Endowment for the Arts. Spearheaded by leadership from the NEA, the creation of ArtPlace is perhaps this Endowment’s, and by extension the Obama administration’s, signature achievement in the arts—despite the fact that it doesn’t distribute a cent of government money.

Stolarick’s presence at the event was appropriate, for in many ways it was The Rise of the Creative Class that made the current creative placemaking movement possible. For a time it was the kind of book that smart people buy for all of the other smart people they know – a genuine ideavirus. Florida, more than anyone else, was responsible for conflating creativity, innovation, and artistry in the popular imagination, and among the measures that he and Stolarick developed for the book was a “Bohemian index” associating the concentration of artists in a given metropolitan area with population and employment growth. Though the empirical claims in the book turned out to be built on shaky foundations, they were intuitive (and well-argued) enough that municipal leaders started taking notice. In fact, Carol Coletta, the current director of ArtPlace, was one of the first people to invite Florida to help put his ideas into practice in a real city context as co-organizer of 2003’s Memphis Manifesto Summit. Florida, Stolarick, and their associates became the first widely acknowledged spokespeople for the idea that a vibrant set of opportunities and amenities for creative expression could lead to regional economic prosperity.

But Florida wasn’t the only one drawing public attention to the economic power of the arts over the previous decade. Separately, the Social Impact of the Arts Project at the University of Pennsylvania has been studying the relationship between concentrations of cultural resources and various social and economic outcomes since 1994. As then-Associate Director of the Rockefeller Foundation, Joan Shigekawa commissioned a groundbreaking collaboration between SIAP and The Reinvestment Fund to study the dynamics of culture and urban revitalization, work whose influence can be seen clearly in much of the policy that Shigekawa has since helped develop as Senior Deputy Chairman of the NEA.

SIAP, which is led by Mark Stern and Susan Seifert, cites The Rise of the Creative Class frequently in its publications dating from that period, usually to position its approach in opposition to Florida’s. In fact, in 2008 SIAP published one of the most hilariously brutal program evaluations I’ve ever read, following the attempts of Florida’s Creative Class Group (CCG) to turn around three Knight Foundation communities by inspiring volunteer “catalysts” to drive toward the “4 T’s” of economic development (technology, talent, tolerance, and territorial assets). In that evaluation, Stern and Seifert offer a single overarching criticism: CCG forgot about its outcomes. Much like South Park’s Underpants Gnomes, the project team had a clear idea of what it was putting in to the process and what it hoped to get out of it, but a much vaguer sense of how it was going to get from Phase 1 to Phase 3.

South Park’s Underpants Gnomes, image courtesy Wikipedia

Which brings me to my central point: despite all of the attention paid to this issue in the past year and a half, despite all of the new money that has been committed to the cause, creative placemaking still has an outcomes problem. As a field, we have not yet learned the lessons of the Underpants Gnomes. And until we do, I’m worried that we risk repeating Stolarick’s apology to practitioners a decade hence.

Leaving the dots unconnected

“When times were good,” Kevin Stolarick explains at the ArtPlace vibrancy indicators convening, it was easy for city councils, funders, and others to buy into the ideas in Florida’s book on the strength of his celebrity and qualitative arguments. But now that cities are facing more economic pressure, Stolarick continues, “they’re saying, ‘we need proof – and that’s going to take more than Richard Florida’s next book.’”

“Proof” is a word that seems to give creative placemakers hives these days. Less than two weeks prior to the ArtPlace event, I had participated in a webinar given by the NEA to introduce its Our Town Community Indicators Study. Our Town is the Endowment’s public-sector counterpart to ArtPlace – likewise the brainchild of Rocco Landesman, it granted some $6.6 million to communities for creative placemaking projects across the country in its inaugural round last year. The Community Indicators Study is a multiyear data collection effort whose chief purpose is to “advance public understanding of how creative placemaking strategies can strengthen communities.” Yet when, prompted by researchers who were listening in on the call, the NEA’s Chief of Staff, Jamie Bennett, asked the Deputy Director of NEA’s Office of Research and Analysis about causation vs. correlation, this is the exchange that resulted:

Bennett: …Are you going to in some way be able through this project to prove [for example] that arts had a direct impact in causing the crime rate to go down?

Shewfelt: A lot of the language I’ve used today has been very carefully chosen to avoid suggesting that we are trying to design a way to specifically address the causal relationship between creative placemaking and the outcomes we’re interested in.

As a matter of fact, the NEA has chosen to forgo a traditional evaluation of the Our Town grant program in favor of developing the aforementioned indicator system. The project will no doubt result in a lot of great data, but it provides essentially no mechanism for connecting the Endowment’s investments in Our Town projects to the indicators one sees. A project could be entirely successful on its own terms but fail to move the needle in a meaningful way in its city or neighborhood. Or it could be caught up in a wave of transformation sweeping the entire community, and wrongly attribute that wave to its own efforts. There’s simply no way for us to tell. I hate to be the bearer of bad news, but we can’t accomplish the goal of “advancing understanding of how creative placemaking strategies can strengthen communities” without digging more deeply into the causal relationships that the NEA would prefer to avoid.
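The attribution problem is easy to state in miniature. In the toy example below (every number invented), a world where the project works and the city is flat produces exactly the same indicator reading as a world where the project does nothing and a citywide wave does all the work.

    # Toy illustration of why an indicator alone cannot attribute change.
    baseline = 100.0  # neighborhood indicator before the grant (invented)

    # World A: the project adds 5 points; nothing else changes.
    world_a = baseline + 5.0 + 0.0
    # World B: the project adds nothing; a citywide wave adds 5 points.
    world_b = baseline + 0.0 + 5.0

    assert world_a == world_b  # the indicator reads 105.0 either way
    print(f"Indicator after the grant, in both worlds: {world_a}")

Separating the two requires some comparison strategy, such as tracking similar neighborhoods that received no grant, which is exactly the causal machinery the indicator system leaves out.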

The vibrancy indicators that were the subject of the ArtPlace convening face a similar quandary. The purpose of the indicators is to help ArtPlace “understand the impact of [its] investments.” And what is that desired impact? During a webinar delivered to prospective applicants last fall, Coletta declared that “with ArtPlace, we aim to do nothing less than transform economic development in America…to awaken leaders who care about the future of their communities to the fact that they’re sitting on a pile of assets that can help them achieve their ambitions…and that asset is art.”

ArtPlace Theory of Change

ArtPlace’s investments all have a singular focus on “vibrancy,” a concept defined in its guidelines as “attracting people, activities and value to a place and increasing the desire and the economic opportunity to thrive in a place.” While that was as specific as things got during ArtPlace’s first two rounds of grantmaking, the indicators project, which examines factors as diverse as cell phone use, population density, and home values, will go a long way toward concretizing ArtPlace’s primary lever of community transformation. Even so, ArtPlace doesn’t seem any more eager than the NEA to connect the activities of its grant recipients to the broader vibrancy indicators directly. Though the projects themselves are supposed to have a “transformative” impact on vibrancy, ArtPlace isn’t requiring its grantees to collect any data on how that impact is achieved. Furthermore, ArtPlace’s guidelines state clearly that the consortium has no plans to invest in research on creative placemaking beyond the vibrancy indicators themselves, despite its advocacy goals and a desire to “share the lessons [grantees] are learning to other communities across the U.S.”

To be clear, I don’t mean to question the value of research of the type ArtPlace and Our Town are leading. Efforts such as these, Fractured Atlas’s Archipelago data aggregation and visualization platform, Americans for the Arts’s National and Local Arts Index, Western States Arts Federation’s Creative Vitality Index, and others help to draw a clear picture of a community’s overall cultural and creative health and can serve as an essential tool within a broader research portfolio. But in order for those tools to really come alive in a grantmaking context, they have to be grounded in a clear and rigorous conceptual frame for how the specific funded activities are going to make a difference, and then integrated into the actual process for selecting grant recipients. And that’s the part still missing from the vast majority of these efforts. In an upcoming article for the Grantmakers in the Arts Reader, Anne Gadwa Nicodemus (who co-authored the original Creative Placemaking white paper for the NEA with her mentor, Ann Markusen) writes, “it’s probably unreasonable to expect that a modest, one-year Our Town grant will move the needle, at least quickly….Because the geographic scale, time horizons, and desired outcomes vary across creative placemaking efforts, one-size-fits-all indicator systems may prove inappropriate.”

Without a clear and detailed theory of how and why creative placemaking is effective, policy and philanthropy to support creative placemaking is hobbled. Attempting to predict and judge impact based on indicator systems alone carries with it at least four problems:

  • It doesn’t give a clear road map for project selection that will identify investments most likely to make a difference. Without previous research demonstrating causal interactions between grants given and differences made, it’s hard to know what effect a new grant will have – much less how to compare the potential effects of hundreds or (in ArtPlace’s case) thousands of competing investment opportunities.
  • It doesn’t give us the tools to go back and analyze why certain projects did and didn’t work. Maybe a public artwork succeeds in drawing people to a neighborhood, but real estate values stay stagnant. Maybe development along a transit corridor was executed on schedule, but ridership is lower than expected. Broad, sector-level indicators will only tell us that the project didn’t work – not why.
  • It doesn’t acknowledge the complex nature of economic ecosystems and the indirect role that arts projects play in them. Many economists agree that talented, highly educated individuals are key to community prosperity. But numerous considerations likely play into their decision to (re)locate in a particular place. When are the arts truly catalytic for a community, and when are they merely icing on the cake? Indicator systems would have no way of telling us on their own.
  • It provides little insight on how to pursue arts-led economic development while avoiding the thorny problems of gentrification. Any thinking around policy interventions must acknowledge the possibility of negative impacts as well as positive ones. In the case of creative placemaking, an attendant worry is that longtime residents of transformed neighborhoods won’t have asked for the change, and may be adversely affected by it. To date, there is little shared understanding of how creative placemaking projects that benefit all community residents are distinguished from those that simply replace poorer residents with wealthier ones.

In her Reader article, Nicodemus writes that

The answer to the question “What is creative placemaking, really?” is that funders and practitioners are making it up in real time. We’ve entered an exciting period of experimentation, which makes sharing information absolutely critical.

In the interest of sharing information, then, I will report out below on some lessons I’ve learned from my own research on the topic over the past five years, as well as from a collaboration with ArtsWave, a funder supporting vibrancy through the arts in the Greater Cincinnati/Northern Kentucky region.

Toward a unified theory of creative placemaking: Filling in the blanks

The major deficiency of the Underpants Gnomes’ business plan was that they attempted to connect their activity (stealing underpants) with their intended impact (profit), without really considering the steps in between. To take an extreme example, if I start an organization called “Artists for World Peace” (there is such an organization, by the way), get some artists together to stand in solidarity, and put on a show, it would be unrealistic of me to expect world peace as the next logical result.

Yet most studies of the connection between the arts and economic development have attempted to measure the direct relationship between arts activities (whether single or in the aggregate) and economic outcomes. For example, the Social Impact of the Arts Project examined the correlation between cultural assets and poverty decline in Philadelphia, and a groundbreaking study by Steve Sheppard compared employment levels and real estate values in North Adams, MA before and after the opening of the Massachusetts Museum of Contemporary Art. These research efforts have done much to shape our collective understanding of urban revitalization through the arts. But they share an unfortunate tendency to gloss over the details of exactly how creative activities are responsible for making neighborhoods and communities more attractive, and therefore more valuable. This gap is especially problematic when one tries to apply the lessons of these studies to a policy or grantmaking context, where the details of how projects are implemented can make all the difference in whether a particular intervention is successful or not.

When I was in graduate school, before I came into contact with any of the research above, I created a simple model of arts-led gentrification to illustrate the specific case of a neighborhood lent a young, “hip” reputation by newly relocated artists. This model is different from others I’ve seen in a few ways. First, it casts neighborhood development as an iterative process, starting with tourism on the local level among artists. In other words, the people who are going to be checking out the happenings in a struggling outpost of the city are not, by and large, yuppies – they are other artists who are colleagues of the ones living in that neighborhood. Second, it emphasizes the role of bars and restaurants as attractors for other neighborhood visitors (including yuppies), whose viability is only made possible by the modest foot traffic generated by arts activities. And finally, it places at the beginning of the process not just arts activities, but specific kinds of arts activities: visible, storefront spaces like galleries and performance venues that signal the presence of art and draw visitors to a particular location.

The Artist Colonization Process
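To show what I mean by an iterative process with thresholds, here is a deliberately crude simulation of the model’s feedback loop. Every parameter is invented; the sketch exists only to make the sequencing (storefront arts spaces, then foot traffic, then bars and restaurants, then a broader pool of visitors) explicit.

    # Crude, invented-parameter cartoon of the artist colonization loop:
    # storefront arts spaces -> foot traffic -> viable bars -> more visitors.
    venues, bars = 3, 0
    for year in range(1, 6):
        arts_traffic = venues * 50             # weekly visits drawn by arts spaces
        if arts_traffic >= 100 and bars < 5:   # enough traffic to sustain a new bar
            bars += 1
        total_traffic = arts_traffic + bars * 80  # bars pull in non-artist visitors
        if total_traffic > 250:                # a busier scene attracts more venues
            venues += 1
        print(f"year {year}: venues={venues}, bars={bars}, traffic={total_traffic}")

Even this cartoon makes the model’s central claim visible: the bars only become viable after arts-generated foot traffic crosses a threshold, and the broader visitors only arrive once the bars do.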

Three years later, some of the thinking reflected above found its way into my grantmaking strategy work with ArtsWave, a local arts agency based in Cincinnati, OH. First, some background: in late 2008, ArtsWave had commissioned a research initiative designed to develop an inclusive public conversation about the arts in the region. Based on hundreds of conversations, interviews, and focus groups with area residents, two key “ripple effect” benefits emerged as especially valued by citizens:

  1. that the arts create a vibrant, thriving economy: neighborhoods are more lively, communities are revitalized, tourists are attracted to the area, and so on; and
  2. that the arts create a more connected community: diverse groups share common experiences, hear new perspectives, understand each other better.

To its immense credit, ArtsWave didn’t just sit on these results and continue in the status quo. Instead, the 83-year-old united arts fund underwent a total transformation, taking on a new name and organizational identity, and most importantly, adopting these two themes as the new goals for its grantmaking.

My task, starting in January 2011, was to assist ArtsWave in creating a new framework for funding arts & culture activities based upon the ability of organizations to create vibrancy and connect people in the region. With the help of a volunteer task force consisting of ArtsWave board members, staff, community leaders, and grantee organizations, we worked backwards from the idea of “vibrancy” and ended up with an extraordinarily complex theory of change. Here’s the part that specifically deals with cultural clusters and neighborhood economic development:

Excerpt from ArtsWave theory of change: cultural clusters

Some elements of this model will certainly look familiar, though with some new wrinkles added: evening and weekend hours for storefronts, for example, as well as decreased crime and improved physical spaces (in general, not just arts spaces). ArtsWave, however, extended the concept to apply to regional economic development as well:

Excerpt from ArtsWave theory of change: regional development

Note here that the principal lever for the regional development model is that the Greater Cincinnati/Northern Kentucky region is “differentiated” through the arts. That is to say, it attracts people from outside of the region because it gains a (deserved) reputation for being a more interesting place to be than its peer cities. And what helps differentiate Cincinnati is something we call “extraordinary cultural experiences.” We attach a very specific definition to “extraordinary,” focusing on its literal meaning of “out of the ordinary.” For ArtsWave’s purposes, experiences are extraordinary if they are associated with one of the following:

  • Events or productions with a national or international profile
  • Events or productions that feature something uniquely special about the region
  • Events or productions that feature innovative programming or presentation

Not only do experiences meeting the above criteria help to differentiate the Greater Cincinnati region in the eyes of tourists or prospective residents, they also contribute directly to ArtsWave’s notion of “vibrancy” (the green arrow in the diagram).

What this approach does is explicitly connect the activities of grantees to the broader community change that ArtsWave hopes to create. A key innovation that came out of this process was the distinction between “Sector Outcomes” (in blue) and “Grantee Outcomes” (in purple). We defined grantee outcomes as the farthest point out in the model to which individual organizations could reasonably be held accountable—and those outcomes feed back into the evaluation and selection process at the grant application stage. All other outcomes, the sector outcomes, are a reflection on ArtsWave’s overall strategy, rather than on any one particular investment. This allows us to “aggregate” impact from the level of the individual project to the level of the broader context.
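As a minimal sketch of that distinction (my own construction for illustration; ArtsWave’s actual model lives in diagrams, not code), one could tag each outcome in the causal chain with the level at which accountability applies:

    # Hypothetical encoding of the grantee-vs-sector outcome distinction.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        name: str
        level: str         # "grantee" (purple) or "sector" (blue)
        feeds: tuple = ()  # downstream outcomes in the causal chain

    chain = [
        Outcome("extraordinary cultural experiences", "grantee",
                ("region differentiated through the arts",)),
        Outcome("region differentiated through the arts", "sector",
                ("regional economic development",)),
        Outcome("regional economic development", "sector"),
    ]

    for o in chain:
        use = ("scored at the grant application stage" if o.level == "grantee"
               else "reviewed as part of ArtsWave's overall strategy")
        print(f"{o.name}: {use}")

The payoff is the aggregation just described: accountability attaches to each link in the chain at the appropriate level.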

The beauty of designing a model like this is that it allows each assumption embedded in each link on the causal chain to be tested, if necessary. Of course, it would be impractical to do so for every investment a grantmaker might make. But that isn’t necessary. In order to provide the kind of evidence that mayors and other officials are looking for, you only need a few examples to demonstrate replicability. But we have to be sure that those examples really do show the effects of intentional creative placemaking strategy, rather than just a lucky coincidence.

Where We Go From Here

Despite the challenges I discuss in the first part of this article, I’m heartened to see creative placemaking funders taking some positive steps toward a more rigorous theoretical foundation for their work. In particular, ArtPlace is beginning to move in this direction with a list of ten signals grantees can use to judge whether their projects are making a difference. The challenge will be to unpack those relationships with the same rigor as is currently applied to collecting data.

Meanwhile, we would love feedback on the models we have created to describe economic development through the arts. While we are hopeful they can help to move the conversation towards a deeper consideration of the complex mechanisms involved in creating place-based vibrancy, we readily acknowledge that they aren’t perfect. Do they accurately reflect creative placemaking goals and processes? Which aspects of the model are best backed up by existing research and which are shakiest? Which seem intuitively right but have not been studied in depth? What are we leaving out?

If you have comments, questions, or resources to offer, please leave a comment here or get in touch at ian.moss@fracturedatlas.org. And in the meantime, Fractured Atlas will be eagerly researching how emerging evaluation methods in other sectors, such as outcome mapping, most significant change technique, and complexity science, can potentially be applied to the arts.

(Enjoyed this post? We’re raising funds through July 10 to make the next generation of Createquity possible. We are 53% of the way there, but need your help to cross the finish line. Please consider a tax-deductible donation today!)


[Createquity Reruns] Thoughts on “Thoughts on Effective Philanthropy”: Lessons from my Summer Internship

(Over the next few months, we’re reaching into the archives to pull out some of the best articles and most underrated gems we’ve published since 2007. This week, we’re wrapping up the “Thoughts on Effective Philanthropy” series, which was my first extended think piece for Createquity back in 2007-08. Following my broadsides from the peanut gallery, I managed to land my first bona fide arts philanthropy job that summer at the William and Flora Hewlett Foundation. So how well did my thoughts on effectiveness hold up after actually getting to experience grantmaking from the inside of one of our nation’s largest arts funders? Find out below. -IDM)

As the twenty or so regular readers of this blog will note, I debuted Createquity last October with a rather brash six-episode litany of “Thoughts on Effective Philanthropy” in the realm of the arts. I say brash because, at the time, I had no experience running a philanthropic program; all I had were my outsider impressions as a practicing artist and a seeker of grants on behalf of organizations with budgets ranging from a few thousand dollars to nearly $4 million per year. So I thought it would be telling to look back at those posts, nearly one year later, and see how my impressions may or may not have changed after a summer working for one of the more prominent arts funders in the country. For the sake of simplicity, I’ll address the essays in the order in which I wrote them.

Thought I: The Nature of the Arts and Their Impact

Original Thesis: Measuring impact in the arts is totally different from measuring impact in other nonprofit areas, in part because the arts occupy a strange netherworld between the nonprofit and for-profit sectors.

The arts, on the other hand, are a field primarily comprised of organizations that produce a product for consumption, much like for-profit companies. In fact, they are basically for-profit companies without the profit. Their value to society (and selling pitch to funders) presumably lies in their ability to bring products to market that would not have otherwise seen the light of day; otherwise, why fund them at all? However, this definition of value doesn’t match up so well with our traditional notions of social responsibility and moral imperative. Think about it this way: if a mission-driven nonprofit were to be wildly successful, so successful that it had entirely solved the problem it was created to address, it would have no choice but to shut down. For presenters, museums, galleries, ensembles, and the like, there is no such consideration: wild success is merely an invitation and an opportunity for more activity. And why shouldn’t it be? Arts organizations, much as they might like to believe otherwise, don’t really exist to solve some urgent problem in society. At some level, like for-profit companies, they are self-serving: they promote the art itself (the product) rather than who experiences the art (the customer).

Post-Internship Analysis: As part of the Performing Arts Program’s Year-in-Review process, we actually spent a good chunk of the summer thinking about the purpose of the arts and how to measure impact. Although I still think the basic insight quoted above is an important one, my dialectic greatly oversimplified the nature of the nonprofit sector. For example, there are many arts organizations whose primary mission is social rather than transactional in nature, though these tend to be the exception rather than the rule. And certainly there are whole classes of non-arts nonprofits that are not set up to achieve the kind of “total success” that would enable them to shut down (such as schools, hospitals, or community organizations). That said, the larger point seems clear: measuring impact in the arts is a challenge precisely because there isn’t a lot of agreement or clarity in the field about what it is, exactly, that the arts “should” be doing. Is it enough for them simply to exist? Does it matter if it’s “good art” or “bad art,” or if one can even tell the difference? And if they do provide ancillary benefits to society, as a growing body of research suggests, does highlighting those benefits diminish the so-called “intrinsic” value of arts experiences? These are extraordinarily challenging questions that a single internship could not hope to address. At the moment, the answers largely remain up to individual choice and preference among supporters of the arts, though we did try to answer them for the Hewlett Foundation.

Thought II: Philanthropy and Experimentation

Original Thesis: While evaluating impact is important, more is generally better when it comes to the arts. Therefore, a narrow focus on supporting only “successful” or “proven” organizations misses the point, because the true value of an arts scene lies in the interactions and network effects made possible by thriving clusters of arts organizations.

So if I’m an agency funding the arts, in some sense I’m not so incredibly concerned with the specific effectiveness of each individual organization I’m supporting. Of course you want your money to be used wisely, but it’s a good thing for the size of the art scene to be able to accommodate the full population of artists who want to work in your geographic area of interest; in other words, to grow according to the supply of artists, not audience demand. So it does not make sense, I would argue, only to fund the blue-chip institutions like the art museums, the symphony orchestras, and the major theater companies in hopes (for example) of lending international prominence and legitimacy to the community. Such a top-down approach potentially leaves out a much larger underground network of artists doing their best to scratch out a living with no institutional support, despite creating significant value for their local communities and economies.

Post-Internship Analysis: As it turns out, the notion that smaller, community-oriented arts organizations are undervalued or represent the future is a common theme in creative economy literature, expressed in various forms by Mark Stern and Susan Seifert at Social Impact of the Arts Project, Duncan Webb of Webb Management Services, Richard Florida in The Rise of the Creative Class, and others. And the importance of experimentation and risk-taking in philanthropy writ large has been highlighted by Sean Stannard-Stockton, Lucy Bernholz, the Skoll Foundation, and plenty of other thought leaders in the field. So it’s heartening to know that my views on this are, if not exactly mainstream, at least echoed by actual professionals who are working in this space. With that said, there are still plenty of donors out there who just want to give to the symphony and the art museum, and that is their prerogative. What we really need is more research to understand the effect that multiple organizations in the same geographic area have on each other and the community, and how that varies systematically across different settings.

An analogy came to me this summer when I visited Yosemite National Park. While exploring one of the giant sequoia groves, I came across a placard explaining that until recently, workers would suppress fires in the park that they thought were endangering the sequoias. They changed the policy when they realized that the fires actually help the sequoias grow by improving conditions for young seedlings and reducing competition from other species. I’ve come to believe that arts policymakers tend to their communities’ art scenes much like park rangers, constantly learning the ways of the forest and implementing strategies to ensure a thriving and diverse environment for public enjoyment.

Thought III: (Dis-)Economies of Scale in the Arts

Original Thesis: Narrowing the argument from the previous essay, I contend that giving to large organizations specifically represents a suboptimal use of most foundations’ resources. Many large organizations have high administrative costs or bloated artist fees that are hard to justify, and are only driven higher by the perception that those organizations can raise money hand over fist. (This, of course, puts pressure on those organizations to deliver on those perceptions, increasing competition for fundraising personnel and raising administrative costs yet further.)

In contrast, small arts organizations are extraordinarily frugal with their resources, precisely because they have no resources to speak of. It’s frankly amazing to me what largely unheralded art galleries, musical ensembles, theater companies, dance troupes, and performance art collectives are able to accomplish with essentially nothing but passion on their side. A $5,000 contribution that would barely get you into the sixth-highest donor category at Carnegie might radically transform the livelihood of an organization like this. Suddenly, they might be able to buy some time in the recording studio, or hire an accompanist for rehearsals, or redo that floor in the lobby, or even (gasp) PAY their artists! All of which previously had seemed inconceivable because of the poverty that these organizations grapple with. Foundations concerned with “impact” should remember that it’s far easier to have a measurable effect on an organization’s effectiveness when the amount of money provided is not dwarfed by the organization’s budget.

Post-Internship Analysis: This really comes down to thinking about overhead in terms of percentages versus absolute dollars. It makes sense if you buy that the impact of an arts organization is proportional to its budget. But is that true? Is a $10 million organization at least twice as important and successful as a $5 million organization? There seems to be an assumption among many in the field that (on average, at least) it is, but I’m not so sure. An orchestra is only going to employ so many musicians regardless of how big its budget gets. There are only 365 days in the year that a theater company can put on a show. Not to mention that the more money an organization raises, the more connections and relationships it builds in service of raising future money. People like to give to winners, after all. I may be biased by my belief in distributive efficiency, but it still seems to me that we’d be wise as a field to fight against this impulse, and look for those high-risk, high-reward, small-dollar investments that can make all the difference.
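The percentages-versus-absolute-dollars point can be made with one line of arithmetic. Using invented budget figures (neither organization’s real numbers):

    # Invented budgets illustrating relative grant size.
    grant = 5_000
    large_org_budget = 100_000_000  # hypothetical major institution
    small_org_budget = 50_000       # hypothetical shoestring ensemble

    print(f"Share of large org's budget: {grant / large_org_budget:.3%}")  # 0.005%
    print(f"Share of small org's budget: {grant / small_org_budget:.0%}")  # 10%

If a dollar’s marginal usefulness scales with anything like these ratios, the same $5,000 check is orders of magnitude more consequential to the small organization.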

Thought IV: Funding Activity, Not Individuals

Original Thesis: Awards or European-style blanket subsidies for artists are problematic because they tend to increase stratification and reward artists more for being visible than for being good. Instead, foundations should look to build and sustain a marketplace in which the currency is artistic merit rather than the ability to draw a crowd.

Where foundations can add value instead is in setting up and supporting systems by which artistic activity is generated in their communities. How might this be accomplished? The first place I would look is what I would call nexuses for art. Where is art shown, produced, performed, bought, sold, consumed, marketed, supported? It’s not just the museums and the concert halls. It’s the dive bars, the galleries, the coffee shops, the off-off-Broadway theaters, the bookstores, the record stores, the radio stations, and the occasional entities that serve as all of these things and more. Finding a way to get money to these organizations is tricky because many of them are set up as for-profit entities. Yet, from the artists’ perspective, many of these tiny businesses fulfill just as important a function as the city’s performing arts center or marquee theater company, despite being labors of love for their proprietors that often operate completely outside of the support structures that exist to make art available to a wider public.

Post-Internship Analysis: I’ve softened my stance a bit on funding individuals, since there are some artists whose activity is not well served by any marketplace, but I still don’t see any reason to be giving out $50,000 grants to established artists. I continue to believe fervently in the second point of the essay, the need to focus on infrastructure in arts communities. Particularly, the connections between nonprofit arts organizations and the for-profit arts industries are not well understood in any sort of systematic way. This is a great opportunity for further research.

Thought V: Meeting the Artists Where They Are

Original Thesis: Arts funders should let artists do their work, and not get too involved with the subject matter or specific details of their creations.

A composer or a playwright is not like a graphic design shop or an IT consulting firm that will create something to a customer’s specifications, no questions asked. The whole point of supporting the arts, to my mind, is to encourage innovation, expectation-challenging, and all that goes along with leading a creative life. Laying out the path ahead of time with too-great specificity potentially squashes the very thing that makes the arts special….I’ve seen projects in the music world greenlighted for little reason other than the possibility of getting a grant for them. Were those always the best projects to undertake, either for the organizations/artists themselves or for the field as a whole (e.g., audiences)? For example, if the most talented artists are unwilling to create works to specification, does that mean that less talented artists receive those opportunities instead and ultimately become better-known to the public as a result? Or if a high-dollar-value grant also includes an educational workshop component, will the panel end up selecting a fine composer who is terrible in the classroom?

Post-Internship Analysis: Luckily for me, this issue just didn’t come up very much during my internship, thanks primarily to the Hewlett Foundation’s philosophy of funding most organizations with general operating support. In general, though, I continue to advocate thinking carefully about how upfront restrictions on grant opportunities can mess with the fundraising and (sometimes) programming strategies of arts organizations.

Thought VI: The Philanthropist as Speculator, Not Gatekeeper

Original Thesis: Grantmakers enjoy a special privilege and thus shoulder an exceptional responsibility to the field by virtue of their access to resources. This isn’t Monopoly money we’re playing with: these are real decisions that affect the lives of real people. As such, grantmakers should seek familiarity with the entire arts community, not just funded organizations.

With that in mind, I would be heartened to see a more proactive approach toward outreach and community presence from grantmaking organizations, particularly foundations. From my perspective as someone representing two small, newish performing ensembles in New York, it seemed like staff members of funding entities attended only events presented by current grantees, if they even attended those. A few, such as NYSCA, had formal “artistic audit” processes by which a potential applicant could request attendance by program staff at a particular performance, but this process had to be initiated by the applicant organization. I knew and still know of no funding organization that makes significant, formalized outreach efforts to more fully understand the arts community that it serves. By “outreach,” I specifically mean measures to amass institutional knowledge, intelligence if you will, about the widest possible range of players in the arena, including organizations that are neither current grantees nor current applicants. To my mind, that’s the only way an organization tasked with supporting an arts community can truly have its “ear to the ground,” so to speak.

Post-Internship Analysis: This was my polite way of saying that funders need to work hard and get out of the office once in a while. In theory, I absolutely stand by this, maybe more so than anything else I’ve written. All through the summer I keenly felt that sense of responsibility of which I speak above, fully aware of the weight my opinions and recommendations suddenly held. However, I found it harder to live up to my own standards in this regard than I anticipated. Even with my very limited portfolio of grant applicants (most of my time was spent on the cultural asset map initiative), it was a challenge to inform myself as much as I wanted. The main stumbling block is the sheer volume of information that must be tracked, prioritized, and deeply understood on a daily basis. Reading a grant application is only the beginning–there’s analysis to be done, facts to be checked, context to be gathered, conversations to be had, performances to attend, and summaries to write up. Multiply that by a few hundred organizations, and you’ve got yourself a pretty decent chunk of work even without considering nonapplicants. This is not to say that a more proactive approach of the kind I envisioned isn’t possible, but it does raise the question of what information is most important and how to gather it efficiently. I wonder if we could learn anything from our equity analyst friends about this.

(Enjoyed this post? We’re raising funds through July 10 to make the next generation of Createquity possible. We are more than halfway there thanks to the generous support of our readers, but need your help to cross the finish line. Please consider a tax-deductible donation today!)
