Fuzzy Concepts, Proxy Data: Why Indicators Won’t Track Creative Placemaking Success

“There is nothing worse than a sharp image of a fuzzy concept.” -Ansel Adams

(If you don’t know the name Ann Markusen, you should. As professor and director of the Project on Regional and Industrial Economics at the University of Minnesota Humphrey School of Public Affairs, Ann has become one of the most respected and senior voices in the arts research community over the past decade. Among her best-known recent efforts was her authorship, with Anne Gadwa Nicodemus, of the original Creative Placemaking white paper published by the NEA prior to the creation of the Our Town grant program and ArtPlace funder collaborative. So when she approached me to offer a guest post on evaluation challenges for creative placemaking, building on previous coverage of the topic here at Createquity, I could hardly say no. I hope you enjoy Ann’s piece and I look forward to the vigorous discussion it will no doubt spark. -IDM)

*

Creative placemaking is electrifying communities large and small around the country. Mayors, public agencies and arts organizations are finding each other and committing to new initiatives. That’s a wonderful thing, whether or not their proposals are funded by national initiatives such as the National Endowment for the Arts’s Our Town program or ArtPlace.

It’s important to learn from and improve our practices on this new and promising terrain. But efforts based on fuzzy concepts and indicators designed to rely on data external to the funded projects are bound to disappoint. Our evaluative systems must nurture rather than discourage the marvelous movement of arts organizations, artists and arts funders out of their bunkers and into our neighborhoods as leaders, animators, and above all, exhibitors of the value of arts and culture.

In our 2010 Creative Placemaking white paper for the NEA, Anne Gadwa Nicodemus and I characterize creative placemaking as a process where “partners… shape the physical and social character of a neighborhood, town, city, or region around arts and cultural activities.” A prominent ambition, we wrote, is to “bring diverse people together to celebrate, inspire, and be inspired.”  Creative placemaking also “animates public and private spaces, rejuvenates structures and streetscapes, (and) improves local business viability and public safety,” but arts and culture are at its core. This definition suggests a number of distinctive arenas of experimentation, where the gifts of the arts are devoted to community liveliness and collaborative problem-solving and where new people participate in the arts and share their cultures.

And, indeed, Our Town and ArtPlace encourage precisely this experimental ferment. Like the case studies in Creative Placemaking, each funded project is unique in its artistic disciplines, scale, problems addressed and aspirations for its particular place. Thus, a good evaluation system will monitor the progress of each project team towards its stated goals, including revisions made along the way. NEA’s Our Town asks grant-seekers to describe how they intend to evaluate their work, and ArtPlace requires a monthly blog entry. But rather than more formally evaluate each project’s progress over time, both funders have developed and are compiling place-specific measures based on external data sources that they will use to gauge success: the Arts and Livability Indicators  in the case of the NEA, and what ArtPlace is calling its Vibrancy Indicators.

Creative placemaking funders are optimistic about these efforts and their usefulness. “Over the next year or two,” wrote Jason Schupbach, NEA’s Director of Design, last May, “we will build out this system and publish it through a website so that anyone who wants to track a project’s progress in these areas (improved local community of artists and arts organizations, increased community attachment, improved quality of life, invigorated local economies) will be able to do so, whether it is NEA-funded or not. They can simply enter the time and geography parameters relevant to their project and see for themselves.”

Over the past two years, I have been consulting with creative placemaking leaders and giving talks to audiences in many cities and towns across the country and abroad. Increasingly, I am hearing distress on the part of creative placemaking practitioners about the indicator initiatives of the National Endowment for the Arts and ArtPlace. At the annual meetings of the National Alliance for Media Arts and Culture last month, my fellow Creative Placemaking panel members, all involved in one or more ArtPlace- or Our-Town-funded projects, expressed considerable anxiety and confusion about these indicators and how they are being constructed. In particular, many current grantee teams with whom I’ve spoken are baffled by the one-measure-fits-all nature of the indicators, especially in the absence of formal and case-tailored evaluation.

I’ll confess I’m an evidence gal. I fervently believe in numbers where they are a good measure of outcomes; in secondary data like Census and the National Center for Charitable Statistics where they are up to the task; in surveys where no such data exist; in case studies to illuminate the context, process, and the impacts people tangibly experience; in interviews to find out how actors make decisions and view their own performance. My own work over the past decade is riddled with examples of these practices, including appendices intended to make the methodology and data used as transparent as possible.

So I embrace the project of evaluation, but am skeptical of relying on indicators for this purpose. In pursuing a more effective course, we can learn a lot from private sector venture capital practices, the ways that foundations conduct grantee evaluations, and, for political pitfalls, defense conversion placemaking experiments of the 1990s.

 

Learning from Venture Capital and Philanthropy

How do private sector venture capital (VC) firms evaluate the enterprises they invest in? Although they target rates of return in the longer run, they do not resort to indicators based on secondary data to evaluate progress. They closely monitor their investees—small firms that often have little business experience, just as many creative placemaking teams are new to their terrain. VC firms play an active role in guiding youthful companies, giving them feedback germane to their product or service goals. They help managers evaluate their progress and bring in special expertise where needed.

Venture capital firms are patient, understanding realistic timelines. The rule of thumb is that they commit to five to seven years, though it may be less or more. Among our Creative Placemaking cases, few efforts succeeded in five years, while some took ten to fifteen years.

VC firms know that some efforts will fail. They are attentive to learning from such failures and sharing what they learn in generic form with the larger business community. Both ArtPlace and the NEA have stated their desire to learn from success and failure. Yet generic indicators, their chosen evaluation tools, are neither patient nor tailored to specific project ambitions. Current Our Town and ArtPlace grant recipients worry that the 1-2 years of funding they’re getting won’t be enough to carry projects through to success or establish enough local momentum to be self-sustaining. Neither ArtPlace nor Our Town has a realistic exit strategy in place for its investments, other than “the grant period’s over, good luck!”

Hands-on guidance is not foreign to nonprofit philanthropies funding the arts.  Many arts program officers act as informal consultants and mentors to young struggling arts organizations and to mature ones facing new challenges. My study with Amanda Johnson of Artists’ Centers shows how Minnesota funders have played such roles for decades. They ask established arts executive directors to mentor new start-ups, a process that the latter praised highly as crucial to their success. The Irvine and Hewlett Foundations are currently funding California nonprofit intermediaries to help small, folk and ethnic organizations use grant monies wisely. They also pay for intermediaries across sectors (arts and culture, health, community development and so on) to meet together to learn what works best.

The NEA has hosted three webinars at which Our Town panelists talk about what they see as effective projects/proposals, a step in this direction. But these discussions are far from a systematic gathering and collating of experience from all grantees in ways that would help grantee cohorts learn from one another and connect with peers facing similar challenges.

 

The Indicator Impetus

Why are the major funders of creative placemaking staking so much on indicators rather than evaluating projects on their own aspirations and steps forward? Pressure from the Office of Management and Budget, the federal bean-counters, is one factor. In January of 2011, President Obama signed into law the GPRA Modernization Act of 2010, updating the original Government Performance and Results Act (GPRA) of 1993, and a new August 2012 Circular A-11 heavily emphasizes the use of performance indicators for all agencies and their programs.

As a veteran of research and policy work on scientific and engineering occupations and on industrial sectors like steel and the military industrial complex, I fear that others will perceive indicator mania as a sign of field weakness. To Ian David Moss’s provocative title “Creative Placemaking has an Outcomes Problem,” I’d reply that we’re in good company. Huge agencies of the federal government, like the National Science Foundation, the National Institutes of Health and NASA, fund experiments and exploratory development without asking that results be held up to some set of external indicators not closely related to their missions. They accept slow progress and even failure, as in cancer research or nuclear fusion, because the end goal is worthy and because we learn from failure. Evaluation by external generic indicators fails to acknowledge the experimental and ground-breaking nature of these creative-placemaking initiatives and misses an opportunity to bolster understanding of how arts and cultural missions create public value.

 

Why Indicators Will Disappoint I: Definitional Challenges

Many of the indicators charted in ArtPlace, NEA Our Town, and other exercises (e.g. WESTAF’s Creative Vitality Index) bear a tenuous relationship to the complex fabric of communities or specific creative placemaking initiatives. Terms like “vitality,” “vibrancy,” and “livability” are great examples of fuzzy concepts, a notion that I used a decade ago to critique planners’ and geographers’ infatuation with concepts like “world cities” and “flexible specialization.” A fuzzy concept is one that means different things to different people, but flourishes precisely because of its imprecision. It leaves its users open to trenchant critiques, as in Thomas Frank’s recent pillorying of the notion of vibrancy.

Take livability, for instance, prominent in the NEA’s indicators project. One person’s quality of life can be inimical to others’. Take the young live music scene in cities: youth magnet, older resident nightmare. Probably no concept as worthy as quality of life has been the subject of so many disappointing and conflicting measurement exercises.

Just what does vibrancy mean? Let’s try to unpack the term. ArtPlace’s definition: “we define vibrancy as places with an unusual scale and intensity of specific kinds of human interaction.” Pretty vague and….vibrancy are places?  Unusual scale? Scale meaning extensive, intensive? Of specific kinds? What kinds? This definition is followed by: “While we are not able to measure vibrancy directly, we believe that the measures we are assembling, taken together, will provide useful insights into the nature and location of especially vibrant places within communities.”  If I were running a college or community discussion session on this, I would put the terms “vibrancy, places, communities, measures,” and so on up on the board (so to speak), and we would undoubtedly have a spirited and inconclusive debate!

And what is the purpose of measuring vibrancy? Again from the same ArtPlace LOI: “…the purpose of our vibrancy metrics is not to pronounce some projects ‘successes’ and other projects ‘failures’ but rather to learn more about the characteristics of the projects and community context in which they take place which leads to or at least seems associated with improved places.” Even though the above description mentions “characteristics of the projects,” it’s notable that their published vibrancy indicators only measure features of place.

In fact, many of the ArtPlace and NEA indicators are roughly designed and sometimes in conflict. While giving the nod to “thriving in place,” ArtPlace emphasizes the desirability of visitors in its vibrancy definition (meaning outsiders to the community); by contrast, the NEA prioritizes social cohesion and community attachment, attributes scarce in the ArtPlace definitions. For instance, ArtPlace proposes to use the employment ratio—“the number of employed residents living in a particular geography (Census Block) and dividing that number by the working age persons living on that same block”—as a measure of people-vibrancy. The rationale: “vibrant neighborhoods have a high fraction of their residents of working age who are employed.” Think of the large areas of new, non-mixed-use, upscale high-rise condos whose mostly young professional residents commute daily to jobs and nightly to bars and cafes outside the neighborhood. Not vibrant at all. But such areas would rank high using this measure.
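
To see how little such a ratio actually tells us, here is a minimal sketch (in Python) of the employment-ratio calculation as ArtPlace describes it; the block labels and counts are invented for illustration, not drawn from any real data.

```python
# Minimal sketch of ArtPlace's proposed people-vibrancy measure: employed
# residents of a block divided by working-age residents of the same block.
# Block labels and counts below are hypothetical, for illustration only.

blocks = {
    # block label: (employed_residents, working_age_residents)
    "block_A_upscale_commuter_highrise": (480, 520),
    "block_B_lower_income_mixed": (150, 400),
}

def employment_ratio(employed, working_age):
    """Share of working-age residents who are employed."""
    return employed / working_age if working_age else float("nan")

for block_id, (employed, working_age) in blocks.items():
    print(block_id, round(employment_ratio(employed, working_age), 2))

# block_A scores 0.92 even if its residents do all of their working,
# shopping, and socializing outside the neighborhood; the ratio says
# nothing about where activity actually happens.
```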

ArtPlace links vibrancy with diversity, defined as heterogeneity of people by income, race and ethnicity. They propose “the racial and ethnic diversity index” (composition not made explicit) and “the mixed-income, middle income index” (ditto) to capture diversity. But what about age diversity? Shouldn’t we want intergenerational activity and encounters too? It is also problematic to prioritize the dilution of ethnicity in large enclaves of recent immigrant groups. Would a thriving heavily Vietnamese city or suburb be considered non-vibrant because its residents choose to live and build their cultural institutions there, facing discrimination in other housing markets? Would an ethnic neighborhood experiencing white hipster incursions be evaluated positively despite a decline in its minority populations as lower-income residents are forced out?
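
ArtPlace has not published the composition of these indices, so as a thought experiment here is one common construction a reader might assume, a Simpson-style index of racial and ethnic heterogeneity, sketched in Python with invented tract counts. It is not ArtPlace’s formula, but it shows how an enclave can score as “non-vibrant” no matter how culturally alive it is.

```python
# One plausible (assumed) construction of a racial/ethnic diversity index:
# 1 minus the sum of squared group shares (a Simpson/Herfindahl-style index).
# This is NOT ArtPlace's published formula, which is not made explicit.

def diversity_index(group_counts):
    """Probability that two randomly chosen residents belong to different groups."""
    total = sum(group_counts)
    if total == 0:
        return float("nan")
    return 1 - sum((n / total) ** 2 for n in group_counts)

# Hypothetical tracts, resident counts by racial/ethnic group:
mixed_tract = [400, 350, 150, 100]        # heterogeneous neighborhood
vietnamese_enclave = [900, 50, 30, 20]    # thriving immigrant enclave

print(round(diversity_index(mixed_tract), 2))        # ~0.69: scores "vibrant"
print(round(diversity_index(vietnamese_enclave), 2))  # ~0.19: scores "non-vibrant"
# The enclave scores low even if it is culturally thriving, and the index
# says nothing about age diversity or displacement dynamics.
```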

Many of the NEA’s indicators are similarly fuzzy. As an indicator of impact on art communities and artists, its August 2012 RFP proposes median earnings for residents employed in entertainment-related industries (arts, design, entertainment, sports, and media occupations). But a very large number of people in these occupations are in sports and media fields, not the arts. The measure does not include artists who live outside the area but work there. And many artists self-report their industry as other than the one listed above, e.g. musicians work in the restaurant sector, and graphic artists work in motion pictures, publishing and so on. ArtPlace is proposing to use very similar indicators—creative industry jobs and workers in creative occupations—as measures of vibrancy.

It is troubling that neither indicator-building effort has so far demonstrated a willingness to digest and share publicly the rich, accessible, and cautionary published research that tackles many of these definitions. See for instance “Defining the Creative Economy: Industry and Occupational Approaches,” the joint effort by researchers Doug DeNatale and Greg Wassall from the New England Creative Economy Project, Randy Cohen of Americans for the Arts, and me at the Arts Economy Initiative to unpack the definitional and data challenges for measuring arts-related jobs and industries in Economic Development Quarterly.

Hopefully, we can have an engaging debate about these notions before indices are cranked out and disseminated. Heartening signs: in its August RFP, the NEA backtracks from its original plan, unveiled in a spring 2012 webinar, to contract for wholesale construction of a given set of indicators to be distributed to grantees. Instead, it is now contracting for the testing of indicator suitability by conducting twenty case studies. And just last week, the NEA issued a new RFP for developing a virtual storybook to document community outcomes, lessons learned and experiences associated with their creative placemaking projects.

 

Why Indicators Will Disappoint II: Dearth of Good Data

If definitional problems aren’t troubling enough, think about the sheer inadequacy of data sources available for creating place-specific indicators.

For more than a half-century, planning and economic development scholars have been studying places and policy interventions to judge success or failure. Yet when Anne Gadwa Nicodemus went in search of research results on decades of public housing interventions, assuming she could build on these for her evaluation of Artspace Projects’ artist live/work and studio buildings, she found that they don’t really exist.

Here are five serious operational problems confronting creative placemaking indicator construction. First, the dimensions to be measured are hard to pin down. Some of the variables proposed are quite problematic—they don’t capture universal values for all people in the community.

Take ArtPlace’s cell phone activity indicator, for instance, which will be used on nights and weekends to map where people congregate. Are places with cell activity to be judged as more successful at creative placemaking? Cell phone usage is heavily correlated with age, income and ethnicity. The older you are, the less likely you are to have a cell phone or use it much, and the more likely to rely on land-lines, which many young people do without. At the November 2012 Association of Collegiate Schools of Planning annual meetings, Brettany Shannon of the University of Southern California presented research results from a survey of 460 LA bus riders showing how cell phone ownership drops off among older riders, particularly older Latinos. Among those aged 18-30, only 9% of English speakers and 15% of Spanish speakers had no cell phone, compared with 29% of English speakers and 54% of Spanish speakers over age 50. A cell phone activity measure is also likely to completely miss people attending jazz or classical music concerts, dramas, and religious cultural events where cell phones are turned off. And what about all those older folks who prefer to sit in coffee shops and talk to each other during the day, play leadership roles in the community through face-to-face work, or meet and engage in arts and cultural activities around religious venues? Aren’t they congregating, too?

Or take home ownership and home values, an indicator the NEA hopes to use. Hmmm… home ownership rates—and values—in the US have been falling, in large part due to overselling of homes during the housing bubble. Renting is just as respectable an option for place lovers, especially young people, retirees, and lower-income people in general. Why would we want grantees to aspire to raise homeownership rates in their neighborhoods, especially given gentrification concerns? Home ownership does not insulate you against displacement, because as property values rise, property taxes do as well, driving out renters and homeowners alike on fixed or lower incomes. ArtPlace is developing “measures of value, which capture changes in rental and ownership values…” This reads like an invitation to gentrification, and runs contrary to the NEA’s aspirations for creative placemaking to support social cohesion and community attachment.

Second, most good secondary data series are not available at spatial scales corresponding to grantees’ target places. ArtPlace’s vibrancy exercise aspires to compare neighborhoods with other neighborhoods, but available data make this task almost impossible to accomplish at highly localized scales. Some data points, like arts employment by industry, are available only down to the county level and only for more heavily populated counties because of suppression problems (and because they are lumped together with sports and media in some data sets). Good data on artists from the Census (Public Use Microdata Sample) and the American Community Survey, the only databases that include the self-employed and unemployed, can’t be broken down below PUMAs (Public Use Microdata Areas) of 100,000 people, which bear little relationship to real neighborhoods or city districts (see Crossover, where we mapped artists using 2000 PUMS data for the Los Angeles and Bay Area metros).

Plus, many creative placemaking efforts have ambitions to have an impact at multiple scales. Gadwa Nicodemus’s pioneering research studies, How Artist Space Matters and How Art Spaces Matter II, looked in hindsight at Artspace’s artist live/work and mixed use projects, where the criteria for success varied widely between projects and for the various stakeholders involved in each. Artists, nonprofit arts organizations, and commercial enterprises (e.g. cafes) in the buildings variously hoped that the project would have an impact on the regional arts community, neighborhood commercial activity and crime rates, and local property values. The research methods included surveys and interviews exploring whether the goals of the projects had been achieved in the experience of target users. Others involved complex secondary data manipulation to come up with indicators that are a good fit. Gadwa Nicodemus’s studies demonstrate how much work it is to document real impact along several dimensions, at multiple spatial scales, and over a long enough time period to ensure a decent test. Her indicators, such as hedonic price indices to gauge area property value change, are sophisticated, but also very time- and skill-intensive to construct.
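
For readers curious what a hedonic price index involves, here is a bare-bones sketch, assuming pandas and statsmodels and a parcel-level sales table with hypothetical column names; the regression nets out the characteristics of what happened to sell in each year, so the year coefficients approximate quality-adjusted price change. Assembling and cleaning such sales data, and localizing the index to a target neighborhood, is where most of the time and skill goes.

```python
# Bare-bones hedonic price index sketch (not Gadwa Nicodemus's actual model).
# Assumes a DataFrame `sales` with hypothetical columns:
#   log_price, sqft, bedrooms, building_age, year, tract
import pandas as pd
import statsmodels.formula.api as smf

def hedonic_year_effects(sales: pd.DataFrame) -> pd.Series:
    """Estimate quality-adjusted log price change by sale year."""
    model = smf.ols(
        "log_price ~ sqft + bedrooms + building_age + C(tract) + C(year)",
        data=sales,
    ).fit()
    # Coefficients on C(year)[T.<yr>] are log price changes relative to the
    # base year, holding observed property characteristics constant.
    return model.params.filter(like="C(year)")
```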

Third, even if you find data that address what you hope to achieve, they are unlikely to be statistically significant at the scales you hope for. In our work with PUMS data from the 2000 Census, a very reliable 5% sample, we found we could not make reliable estimates of artist populations at anything near a neighborhood scale. To map the location of artists in Minneapolis, we had to carve the city into three segments based on PUMA lines, and even then, we were pushing the statistical reliability hard (Artists’ Centers, Figure 3, p. 108).

Some researchers are beginning to use the American Community Survey, a 1% sample much smaller than the decennial Census PUMS 5%, to build local indicators, heedless of this statistical reliability challenge. ArtPlace, for instance, is proposing to use ACS data to capture workers in creative occupations at the Census Tract level. See the statistical appendix to Leveraging Investments in Creativity (LINC)’s Creative Communities Artist Data User Guide for a detailed explanation of this problem. Pooling the ACS over five years, one way of improving reliability, is problematic if you are trying to show change over a short period of time, which the creative placemaking indicators presumably aspire to do.
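
A back-of-the-envelope calculation makes the reliability problem concrete. The sketch below uses the normal approximation for a sampled proportion and stylized population sizes and sampling rates (assumptions for illustration, not official ACS figures) to show how the margin of error on a rare characteristic like “works in an artist occupation” swamps the estimate at tract scale.

```python
# Rough margin of error on a rare share (say ~1.5% of adults in artist
# occupations) estimated from 1%-style and 5%-style samples. All figures
# are stylized assumptions for illustration, not official ACS parameters.
import math

def relative_moe(true_share, population, sampling_rate, z=1.645):
    """Approximate 90% margin of error as a fraction of the estimate."""
    n = population * sampling_rate
    se = math.sqrt(true_share * (1 - true_share) / n)
    return z * se / true_share

for area, pop in [("census tract (~4,000 people)", 4_000),
                  ("PUMA (~100,000 people)", 100_000)]:
    for sample, rate in [("1%-style sample", 0.01), ("5%-style sample", 0.05)]:
        print(f"{area}, {sample}: relative 90% MOE ~ {relative_moe(0.015, pop, rate):.0%}")

# At tract scale the margin of error is roughly as large as, or larger than,
# the estimate itself; only at PUMA scale does it become tolerable.
```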

Fourth, charting change over time successfully is a huge challenge. ArtPlace intends to “assess the level of vibrancy of different areas within communities, and importantly, to measure changes in vibrancy over time in the communities where ArtPlace invests.” How can we expect projects that hope to change the culture, participation, physical environment and local economy to show anything in a period of one, two, three years? More ephemeral interventions may only have hard-to-measure impacts in the year that they happen, even if they catalyze spinoff activities, while the potentially clearer impact of brick-and-mortar projects may take years to materialize.

We know from our case studies and from decades of urban planning and design experience that changes in place take long periods of time. For example, Cleveland’s Gordon Square Arts District, a case study in Creative Placemaking, required at least five years for vision and conversations to translate into a feasibility study, another few years to build the streetscape and renovate the two existing shuttered theatres, and more to build the new one.

Because it’s unlikely that the data will be good enough to chart creative placemaking projects’ progress over time, we are likely to see indicators used in a very different and pernicious way – to compare places with each other in the current time period. But every creative placemaking initiative is very, very different from others, and their current rankings on these measures are more apt to reflect long-time neighborhood evolution and particularities than the impact of their current activities. I can just see creative placemakers viewing such comparisons and throwing their hands up in the air, shouting, “but… but… but, our circumstances are not comparable!”

One final indicator challenge. As far as I can tell, there are very few arts and cultural indicators included among the measures under consideration. Where is the mission of bringing diverse people together to celebrate, inspire, and be inspired? Shouldn’t creative placemaking advance the intrinsic values and impact of the arts? Heightened and broadened arts participation? Preserving cultural traditions? Better quality art offerings? Providing beauty, expression, and critical perspectives on our society? Are artists and arts organizations whose greatest talents lie in the arts world to be judged only on their impact outside of this core? Though arts participation is measurable, many of these “intrinsic” outcomes are challenging data-wise, just as are many of the “instrumental” outcomes given central place in current indicator efforts. WolfBrown now offers a website that aims to “change the conversation about the benefits of arts participation, disseminate up-to-date information on emerging practices in impact assessment, and encourage cultural organizations to embrace impact assessment as standard operating practice.”

 

The Political Dangers of Relying on Indicators

I fear three kinds of negative political responses to reliance on poorly-defined and operationalized indicators.  First, it could be off-putting to grantees and would-be grantees, including mayors, arts organizations, community development organizations and the many other partners to these projects. It could be baffling, even angering, to be served up a book of cooked indicators with very little fit to one’s project and aspirations and to be asked to make sense out of them. The NEA’s recent RFP calls for the development of a user guide with some examples, which will help. Those who have expressed concern report hearing back something like “don’t worry about it – we’re not going to hold you to any particular performance on these. They are just informational for you.” Well, but then why invest in these indicators if they aren’t going to be used for evaluation after all?!

Second, creative placemaking grants create competitors, and that means they are generating losers as well as winners. Some who aren’t funded the first time try again, and some are sanguine and grateful that they were prompted to make the effort and form a team. But some will give up. There are interesting parallels with place-based innovations in the 1990s. The Clinton administration’s post-Cold War defense conversion initiatives included the Technology Reinvestment Project, in which regional consortia competed for funds to take local military technologies into the civilian realm. As Michael Oden, Greg Bischak and Chris Evans-Klock concluded in our 1995 Rutgers study (full report available from the authors on request), the TRP failed after just a few years because Members of Congress heard from too many disgruntled constituents. In contrast, the Manufacturing Extension Partnership, begun in the same period and administered by NIST, has survived because, after its first exploratory rounds, it partnered with state governments to amplify funding for technical assistance to defense contractors struggling with defense budget implosion everywhere. States, rather than projects, then competed, eager for the federal funds.

Third, and most troubling, funders may begin favoring grants to places that already look good on the indicators. Anne Gadwa Nicodemus raised this in her GIA Reader article on creative placemaking last spring. ArtPlace’s own funding criteria suggest this: “ArtPlace will favor investments… and sees its role as providing venture funding in the form of grants, seeding entrepreneurial projects that lead through the arts and already enjoy strong local buy-in and will occur at places already showing signs of momentum….” Imagine how a proposal to convert an old school in a very low income and somewhat depopulated, minority neighborhood into an artist live/work, studio and performance and learning space would stack up against a proposal to add funding to a new outreach initiative in an area already colonized by young people from elsewhere in the same city. A funder might be tempted to fund the latter, where vibrancy is already indicated, over the other, where the payoff might be much greater but farther down the road.

 

In an Ideal World, Sophisticated Models

In any particular place, changes in the proposed indicators will not be attributable to the creative placemaking intervention alone. So imagine the distress of a fundee whose indicators are moving the wrong way and place it poorly in comparison with others. Area property values may be falling because an environmentally obnoxious plant starts up. Other projects might look great on indicators not because of their initiatives, but because another intervention, like a new light rail system or a new community-based school, dramatically changes the neighborhood.

What we’d love to have, but don’t at this point, are sophisticated causal models of creative placemaking. The models would identify the multiple actors in the target place and take into account the results of their separate actions. A funded creative placemaking project team would be just one such “actor” among several (e.g. real estate developers, private sector employers, resident associations, community development nonprofits and so on).

A good model would account for other non-arts forces at work that will interact with the various actors’ initiatives and choices. This is crucial, and the logic models proposed by Moss, Zabel and others don’t do it. Scholars of urban planning well know how tricky it is to isolate the impact of a particular intervention when there are so many others occurring simultaneously (crime prevention, community development, social services, infrastructure investments like light rail or street repaving).

Furthermore, models should be longitudinal, i.e. they should chart progress in the particular place over time, rather than comparing one place cross-sectionally with others that are quite unlikely to share the same actors, features and circumstances. If we create models that are causal, acknowledge other forces at work, and are applied over time, “we’ll be able to clearly document the critical power of arts and culture in healthy community development,” reflects Deborah Cullinan of San Francisco’s Intersection for the Arts in a followup to our NAMAC panel.

Such multivariate models, as social scientists and urban planners call them, lend themselves to careful tests of hypotheses about change. We can ask if a particular action, like the siting of an interstate highway interchange or adding a prison or being funded in a federal program like the Appalachian Regional Commission, produces more employment or higher incomes or better quality of life for its host city or neighborhood when compared with twin or comparable places, as Andrew Isserman and colleagues have done in their “quasi-experimental” work (write me for a summary of these, soon to be published).
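
To make the quasi-experimental idea concrete, here is a minimal sketch, in Python, of matching a treated place to “twin” places with similar pre-intervention characteristics and comparing their outcome changes. The column names and the simple distance-based matching rule are my own assumptions for illustration, not Isserman’s actual procedure.

```python
# Minimal "twin places" sketch: match a treated place to untreated places with
# similar pre-period characteristics, then compare outcome change. Column names
# and the distance-based matching rule are illustrative assumptions only.
import pandas as pd

def matched_comparison(places: pd.DataFrame, treated_id: str,
                       covariates: list, outcome_change: str, k: int = 5) -> float:
    # Standardize covariates so no single variable dominates the distance.
    z = (places[covariates] - places[covariates].mean()) / places[covariates].std()
    distance = ((z - z.loc[treated_id]) ** 2).sum(axis=1) ** 0.5
    twins = distance.drop(treated_id).nsmallest(k).index
    # Treated place's outcome change minus the average change among its twins.
    return (places.loc[treated_id, outcome_change]
            - places.loc[twins, outcome_change].mean())
```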

We can also run tests to see if differentials in city and regional arts participation rates and presence of arts organizations can be explained by differences in funding, demographics, or features of local economies. My teammates and I used Cultural Data Project and National Center for Charitable Statistics data on nonprofit arts organizations in California to do this for all California cities with more than 20,000 residents. Our results, while cross-sectional, suggest that concerted arts and culture-building by local Californians over time leads to higher arts participation rates and more arts offerings than can be explained by other factors. The point is that techniques like these DO take into account other forces (positive and negative) operating in the place where creative placemaking unfolds.
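
And here is the shape of that kind of cross-sectional test, again as a hedged sketch: the city-level variable names are placeholders rather than the ones we actually used, but the structure shows how the arts variable’s association with participation is estimated net of demographic and economic controls.

```python
# Sketch of a cross-sectional multivariate test: is local arts investment
# associated with arts participation once other city characteristics are
# controlled? Column names are hypothetical placeholders, not the actual
# Cultural Data Project / NCCS variables used in our study.
import pandas as pd
import statsmodels.formula.api as smf

def arts_funding_effect(cities: pd.DataFrame):
    model = smf.ols(
        "participation_rate ~ arts_funding_per_capita + median_income"
        " + pct_college_degree + log_population + unemployment_rate",
        data=cities,
    ).fit()
    # The funding coefficient is its association with participation net of the
    # other measured forces; a longitudinal panel would strengthen causal claims.
    return model.params["arts_funding_per_capita"], model.pvalues["arts_funding_per_capita"]
```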

 

Charting a Better Path

It’s understandable why the NEA and ArtPlace are turning to indicators. Their budgets for creative placemaking are relatively small, and they’d prefer to spend them on more programming and more places rather than on expensive, careful evaluations.  Nevertheless, designing indicators unrelated to specific funded projects seems a poor way forward. Here are some alternatives.

Commit to real evaluation. This need not be as expensive as it seems. Imagine if the NEA and ArtPlace, instead of contracting to produce one-size-fits-all indicators, were to design a three-stage evaluation process.  Grantees propose staged criteria for success and reflect on them at specified junctures. Funding is awarded on the basis of the appropriateness of this evaluative process and continued on receipt of reflections. Funders use these to give feedback to the grantee and retool their expectations if necessary, and to summarize and redesign overall creative placemaking achievements. This is more or less what many philanthropic foundations do currently and have for many years, the NEA included. Better learning is apt to emerge from this process than from a set of indicator tables and graphics.  ArtPlace is well-positioned to draw on the expertise of its member foundations in this regard.

Build cooperation among grantees to soften the edge of competition for funds. Convene grantees and would-be grantees annually to talk about success, failures, and problems. Ask successful grantees to share their experience and expertise with others who wish to try similar projects elsewhere. During Leveraging Investments in Creativity’s ten-year lifespan, it convened its creative community leaders annually and sometimes more often, resulting in tremendous cross-fertilization that boosted success. Often, what was working elsewhere turned out to be a better mission or process than what a local group had planned. Again, ArtPlace in particular could create a forum for this kind of cooperative learning. And, as mentioned, NEA’s webinars are a step in the right direction. Imagine, notes my NAMAC co-panelist Deborah Cullinan of Intersection for the Arts, if creative placemaking funders invested in cohort learning over time, with enough longevity to build relationships, share lessons, and nurture collaborations.

Finally, the National Endowment for the Arts and ArtPlace could provide technical assistance to creative placemaking grantees, as the Manufacturing Extension Partnership does for small manufacturers. Anne Gadwa Nicodemus and I continually receive phone calls from people across the country psyched to start projects but in need of information and skills on multiple fronts. There are leaders in other communities, and consultants, too, who know how creative placemaking works under diverse circumstances and who can form a loose consortium of talent: people who understand the political framework, the financial challenges, and the way to build partnerships. Artspace Projects, for instance, has recently converted over a quarter century of experience with more than two dozen completed artist and arts-serving projects into a consultancy to help people in more places craft arts-based placemaking projects.

Wouldn’t it be wonderful if, in a few years’ time, we could say, look!  Here is the body of learning and insights we’ve compiled about creative placemaking–how to do it well, where the diverse impacts are, and how they can be documented. With indicators dominating the evaluation process at present, we are unlikely to learn what we could from these young experiments. An indicators-preoccupied evaluation process is likely to leave us disappointed, with spreadsheets and charts made quickly obsolete by changing definitions and data collection procedures. Let’s think through outcomes in a more grounded, holistic way. Let’s continue, and broaden, the conversation!

(The author would like to thank Anne Gadwa Nicodemus, Deborah Cullinan, Ian David Moss, and Jackie Hasa for thorough reads and responses to earlier drafts of this article.)

Note to readers: In addition to the comments below, the National Endowment for the Arts and ArtPlace have now published official responses to this article.


7 Comments

  1. Posted November 9th, 2012 at 10:24 am | Permalink

    Ann asked me to weigh in with a response, so here goes.

    I very much agree with one half of Ann’s central thesis: that the only way to get a true understanding of how creative placemaking works is to design an evaluation framework that pays more attention to the activities of individual projects. I’ve addressed this at length in the past, so I won’t belabor the point here. But one thing I will say is that there is a difference between measuring the effects of creative placemaking and measuring the effects of grants for creative placemaking. I think it’s critically important that we invest in the latter if we are to ensure that we’re making the right resource allocation decisions going forward. What bothers me the most about this system not currently being in place is that all sorts of decisions have been made each of the past few years to support one creative placemaking project over another without the benefit of good empirical grounding to guide those decisions. I could understand this (though I still don’t agree with it) for the very first round of grants, but now that we’re entering the third round of each of these programs it seems problematic to me. To be fair, I am open to the possibility that it might not make all that much difference in the end – but I don’t think we should assume that’s the case.

    Where Ann and I diverge is our orientation to indicators. I’ve argued in this space previously that I don’t think indicators are useful in solving the specific problem described above, and I continue to feel that way. However, Ann takes the argument further to posit that indicator systems are a waste of time overall and possibly even counterproductive. I’m not sure I can get on board with that. I’m generally of the opinion that some information is better than no information, as long as the partial, imperfect information (and let’s face it, just about all information is partial and imperfect) is properly contextualized and its limitations made transparent. I agree that indicators could be misused by both the creators and consumers of such systems, but that’s true with any research project. Furthermore, indicators can be an important part of the research “supply chain,” feeding into a more in-depth evaluation of the kind that Ann envisions. I’ve never said that the NEA and ArtPlace shouldn’t be doing their indicators projects at all, only that I wish they were prioritizing their research resources towards what I see as a more pressing need. Finally, I think it’s important to acknowledge that the NEA (more so than ArtPlace) has substantially adjusted its plans in response to feedback received from many quarters, including this one, over the past six months or so. In particular, the virtual storybook project, while not completely addressing the concerns above, does bring things closer to the kind of framework we’ve been arguing for.

    What I find most intriguing (and tantalizing) about this piece is Ann’s mention at the end of “sophisticated causal models” for creative placemaking. She correctly points out that none of the models we’ve seen, including the one I helped develop for ArtsWave, accounts for possible inputs into the system besides the arts investments. I’ve been aware of this for some time but have not thus far come up with a good solution, especially in light of the fact that that particular model is so complex already. Since Ann works in so many fields besides the arts, I’d like to ask/challenge her to share the very best such multivariate model that she’s seen, so that we can study it and consider how its innovations might be applied to creative placemaking.

    Thank you, Ann, for the thought-provoking article.

  2. Tony Macklin
    Posted November 9th, 2012 at 1:10 pm | Permalink

    Thanks to you both for the thoughtful article and response!

    Smart, effective evaluation takes more time, discipline, resources, and collaborative spirit than funders and nonprofits are usually willing to dedicate. And, I agree that funders often impose the wrong evaluation systems and indicators on their own initiatives and on individual grantees.

    But I also know too many organizations that are comfortable hiding behind the “cultural development and placemaking are too hard to measure” routine. They’ll continue to lose in the battles for increasingly limited government funds and they’ll miss out on attracting wealthy donors in their 50s and younger who are learning to ask questions about real impact.

  3. Posted November 12th, 2012 at 12:48 pm | Permalink

    My thanks to Ian for the thoughtful post.

    On “measuring the effects of grants,” I’m not sure why Ian uses the word “measure.” Yes, we should evaluate, but the word measure suggests numbers. I’ve never understood why people distinguish so starkly between quantitative and qualitative forms of evaluation—some disparage all numbers as empiricist and others disparage any form of qualitative findings. Not good social science. We should be asking about outcomes, not just inputs. That is, it’s not enough for fundees to simply state how they used the funds—they should convey how the results match what they anticipated and hoped for.

    And since these are new and experimental efforts, each one of a kind, some amount of failure should be tolerated IF we learn from it. So the most important thing is not the measures, but the sharing of lessons as we go forward. Think of the amazing work done for decades to create better and more disease-resistant fruits and vegetables. Scientists tried many different variants and growing conditions, and when some turned out disappointingly, they welcomed that as valuable information and tweaked their next round accordingly. Similarly, good monitoring of creative placemaking projects will yield insights—on building partnerships, on scale, etc—that will inform future rounds. I’m hoping others more skilled at funding will weigh in on this question!

    Sophisticated causal models. Yes. If we want to demonstrate that a particular creative placemaking intervention is having a positive effect on anything—arts participation, local retail revenues, artists’ incomes—we have to also acknowledge all the other activities that are influencing these same outcomes. This is what good social science models (and the sciences in general) do: they theorize what factors are influencing any particular phenomenon and test for their influences simultaneously. For instance, if you are trying to figure out if a particular drug lowers heart disease in a group of patients, you have to also control for other changes in their lives – successful weight loss, smoking cessation, heightened work stress, etc.

    An early and easy-to-summarize example is the work Peter Hall, Amy Glasmeier and I did on whether US metropolitan high tech job and enterprise growth in the 1980s was due primarily to University R&D efforts, as contended by those who pressed for more such funding at the state level, or to other factors as well (High Tech America, Allen & Unwin, 1986). We built a model that included other possibilities: differential military procurement spending, the creation of human capital through University higher education degree programs, and about ten other features of metro economies. We used multiple regression on all 377 US metros, cross-sectionally and over time, to test the direction of influence and the size and significance of the effects of each of the contributing variables as well as their joint effects. We found that University R&D spending was actually negatively correlated with high tech job and plant growth, while both military spending (prime contracts per capita) and University higher educational offerings were positive and significant contributors. This enabled us to make a powerful argument for higher educational support for technology training but to warn that the fruits of University R&D would likely flow out of the host region, a loss from the point of view of economic development.

    Work with metro areas is fairly easy to do, because they are large and because they approximate labor market and live/work areas, so that you can assume there are not sizeable interactions between what goes on inside them and what happens outside of their boundaries. Unfortunately, it is very difficult, for reasons I state in the blog, to do good analysis for neighborhoods or districts within cities and metro areas. That’s both because the borders of such areas are very porous, with people moving in and out of them daily to live, work, and recreate, and because the data are just very poor.

    Urban planning researchers have long struggled with the problem of how to document changes at the small area level. I take the liberty of reproducing here a summary by Professor Emily Talen of Arizona State University that she circulated on our PLANET (the listserv for academic planning researchers) last spring. Emily had asked members to share how they use and view data on census tracts and blocks, and the responses are very illuminating, and discouraging. Start at the bottom if you’d like to see her original query. Note that the confidence interval approach is what Greg Schrock and I employed in our use of the 2000 PUMS Census data to estimate the presence of and characteristics of artists in the work we did for Leveraging Investments in Creativity.

    Sun, 4 Mar 2012 09:25:09 -0700

    A few weeks ago I asked how researchers are dealing with the problem of lack of census 2010 income/poverty data at the tract and block group levels.

    Thanks to all who responded. Below is a summary of responses.

    The ACS data are the best we’ve got, and I suppose it depends upon what you are using it for. I tend to aggregate tract level data (usually 20 or more tracts), rather than use it for comparisons between individual tracts (and I have since given up on Block Group level analysis aside from the yr 2000, unless it’s specifically requested). Aggregating tract level data should minimize the error within individual tracts and so instead of dumping out tracts with MOEs greater than 10%, I leave them in. Most of the work I do involves trend comparisons, so if I find something unexpectedly off from the 2000 data, I dig through to look at the MOEs more carefully, but I’ve found that the larger the sample/number of tracts, the better the estimates look. Otherwise, if it’s a smaller area (more likely to have error) I simply cite it as an estimate. Unfortunately, it is what it is.

    ******

    I inquired with the Center for Social and Demographic Analysis. The response was quite similar to X [see below], regarding the error in the Census vs. ACS. However, they recommended a recent Brookings Report:

    http://www.brookings.edu/papers/2011/1103_poverty_kneebone_nadeau_berube.aspx

    which actually takes [margin of error] into account and does a 90% confidence interval. Presenting the confidence interval seems to be the approach of the New York State Data Center network.

    *********

    It is important to note what a 5-year ACS estimate really tells you. I looked into this in great depth and the proper interpretation is not that the 5 year data are the average of 5 different years, but only represents that, on average over the entire 5 year period, this would be the poverty rate, income level, etc. That is not really precise or helpful, given the most recent recession. From the census: “all ACS estimates are period estimates and are interpreted as the average values over the full time period. For example, 2007 ACS 1-year estimates describe the average characteristics for calendar year 2007 while 2005–2007 ACS 3-year estimates describe the average characteristics for the 3-year period of 2005, 2006, and 2007.” (source: http://www.census.gov/acs/www/Downloads/handbooks/ACSResearch.pdf. )

    Of course, the 2000 SF3 data also actually had margins of error, they just weren’t easy to find and were rarely used by researchers or policy analysts. At the block group or tract level, even 2000 SF3 data had pretty large margins of error. (see: http://www.census.gov/prod/cen2000/doc/sf3.pdf Chapter 8: accuracy of the data, which shows how to calculate standard errors for census 2000 SF3 data.)

    *********

    [Many people in HUD] are struggling with the large standard errors in the ACS. For better or worse, we tend to use tracts as a proxy for neighborhoods. The push to examine block groups is all but over. We are struggling just to find an acceptable mechanism that permits us to use tracts.

    The ACS data do provide information on the level of poverty in each tract. The problem, which you highlight, is that the standard error is very high. The approach that I am using seeks to incorporate the standard errors into the analysis.

    My research examines the use of Housing Choice Vouchers for poverty deconcentration. For proper program evaluation, it is crucial to know the level of poverty in a neighborhood to determine whether an assisted household located in a desirable, low-poverty neighborhood or in a less desirable, moderate- or high-poverty neighborhood.

    Each tract is categorized into low-poverty (less than 10 percent), moderate-poverty (10 to 40 percent), or high-poverty (greater than 40 percent). Each tract is assessed a probability of being correctly categorized based upon comparing the margin of error of the poverty estimate to the difference between the tract’s level of poverty and the thresholds.

    For example, a tract has 14.4 percent poverty with a 90 percent margin of error of 13.9 percentage points. The 14.4 percent level of poverty categorizes it as moderate-poverty (i.e.: 10 < 14.4 < 40).

    This margin of error says that we are 90 percent confident that the true percent poverty is between 0.5 percent and 28.3 percent.

    Thus, there is little chance that the true level of poverty is greater than 40 percent placing it in the high category. There is a chance that the true level of poverty is less than 10 percent placing it in the low category. At issue is the probability of having placed the tract in the correct category.

    Converting the percentages to Z scores indicates that 0.5% is associated with a Z score of -1.65 (i.e.: 45 percent of the area under the normal curve is between -1.65 and zero.) The 10 percent standard is 4.4 percentage points below the estimate of 14.4 percent. Thus, the 10 percent standard is 31.7 percent [ 4.4/(14.4-0.5) ] of the distance from the mean to the -1.65 level. This means that the 10 percent standard corresponds to a Z score of -.522 (1.65 * .317). The Z score of .522 corresponds to 19.9 percent of the area under the normal curve from the mean to the 10 percent standard. This 19.9 percent can be added to the 50 percent above the mean. The result indicates that about 30.1 percent of the area under the normal curve is to the left of the 10 percent standard and 69.9 percent are to the right.

    The example tract would be assigned a 70 percent probability of being correctly assigned to the moderate-poverty category, a 30 percent probability of being incorrectly left out of the low-poverty category, and an effectively 0 percent probability of being incorrectly left out of the high-poverty category.

    To evaluate where the household located, we will examine the locations weighted by the probability of correctly categorizing the tracts.

    This does not solve all of the problems of the large standard errors. It is very cumbersome, so it may not be the best way. I look forward to learning what others are doing.

    *****

    Personally, I seldom have any need for income or poverty data at the tract level on my own projects. Because of the nature of what I do, I usually deal with the stuff that is reliable at small geographic areas – e.g. age, sex, race, housing from the census, and additional data from other sources like taxlots, births, deaths, school enrollment, residential capacity, etc.

    For those who do need this data, my answer is that there is no guarantee that the census data were ever "usable." In an accuracy contest for income and poverty data at the tract level between the 2000 Census and the 2006-2010 ACS, the 2000 Census is likely the winner, but I would have discouraged anyone from using the long form data at the block group level from the 2000 Census. The Bureau provided formulas to compute the sampling error, but they didn't make the MOEs obvious, so people used the results as gospel.

    For example, in Old Town – Census tract 51, obviously a high poverty tract in 1999, the unweighted long form sample of occupied housing units was 259. The Summary File 1 count of households was 1,893, so the long form sample went to 13.7% of households. That is way better than the 5.2% sample of housing units in the 5 year ACS (they don't report unweighted HHs), but some or all of the income data was imputed for 34% of the households in the 2000 Census, meaning that the household may (or may not?) have responded to the census but did not fill out all of the income categories. In the 5 year ACS, only 6% of households had income imputed.

    So results from both the 2000 Census long form and the ACS are estimates. The ACS has higher sampling error, but the Census had higher non-sampling error.

    *****

    Original inquiry:

    The Census abolished the long form in 2010, so no more income, poverty, education, and employment data. One must use the American Community Survey (ACS) 5-year estimates instead. The problem is that the margins of error associated with the new ACS data render it unusable. The Census Bureau recommends against using estimates with errors larger than 10%. We looked at per capita income, for example, and found that almost 80% of the tract data and 99% of the block group data are above that threshold.

    One can use 2008 IRS data, but it’s at the zipcode level – significantly larger than tracts.

    What data sources or proxies for income at the neighborhood level are people using?

    Emily Talen, Professor
    School of Geographical Sciences and Urban Planning
    School of Sustainability
    http://geoplan.asu.edu/talen

  4. Posted November 14th, 2012 at 3:35 pm | Permalink

    Note to readers: ArtPlace has posted a detailed rebuttal to this article on its website. Definitely worth a read.

  5. John Carnwath
    Posted November 19th, 2012 at 9:34 am | Permalink

    I think most people would agree that different kinds of research serve different purposes. To name a few examples off the top of my head:

    • There is research that seeks to find out something fundamental about our universe, our planet, human nature, how societies work, etc.
    • There is research that seeks to find out and aggregate everything that is already known about a particular topic.
    • There is research that seeks to answer a specific question.
    • There is research that seeks to support an argument by rounding up all of the data that can be advanced in its favor.
    • There is research that seeks to justify past decisions.
    • There is research that seeks to document conditions at regular intervals in order to observe changes over time.
    • There is research that seeks to find out and document what is happening to create accountability and transparency.
    • There is market research that seeks to find out what consumers’ preferences are.
    • There are criminal investigations, espionage, and interrogations
    • …

    (There is certainly overlap between these types of research and this list is by no means comprehensive or intended as the basis of a rigorous typology.)

    Researchers who work in these different areas often belittle each other’s work, claiming that it lacks academic rigor or is out of touch with the practical problems in the field, or something like that. Here, I’m going to assume that all types of research have their own place and are equally valid.

    I agree with Markusen that we shouldn’t expect to see significant changes in the “culture, participation, physical environment and local economy … in a period of one, two, three years” as a result of creative placemaking grants. However, I am not convinced that longitudinal studies of the type Markusen calls for, which track projects over the 10, 15, or however many years it takes them to succeed, will improve our grantmaking decisions much either (and certainly not in the short term). The cause of my skepticism is that I have little reason to believe that whatever connection there is between grants for arts projects and the development of vibrant communities will remain constant over a 15-year period.

    The problem, as I see it, is that what we’re interested in with arts funding are little ripples in the surface of human expression, not deep currents in human psychology or social behavior. I don’t think that anyone at the NEA or ArtPlace is claiming that placemaking is a definitive answer to arts funding. It’s not like they’ve announced, “For 10,000 years humans have been creating art without ever really knowing why they do it, but we’ve found the answer: it’s to increase vibrancy indicators!” I could be wrong, but I’m pretty sure that if a mayor in 2062 suggests funding a project to increase the vibrancy of creative places, she or he will be ridiculed for clinging to ideas that were in vogue before the war (or before the nano-tech boom, or the great flood, or whatever big event defines the next half-century).

    I’m not saying that the idea of creative placemaking lacks substance. I think there’s a very strong argument to be made at the moment for funding the arts on the basis that they increase the vibrancy of local communities and may have other positive spillover effects. I think the NEA and ArtPlace make that case very convincingly, and that is exactly what they set out to do: make a strong argument for supporting the arts in local communities. As Carol Coletta writes, “Our interest … is in bolstering the rationale for arts investment.” And if vibrancy indicators can help make the case for arts funding for the next 10 or 15 years, that’s great.

    Just as economic impact studies helped keep the arts alive for the last 15 years (and continue to be effective) by giving arts advocates some hard figures to bring to the table, which in turn gave politicians something to point to when directing tax dollars to the arts, creative placemaking is a reflection of the times. The major foundations’ efforts to establish big, stable elite arts institutions in the twentieth century were likewise products of their time, and I assume that the time for creative placemaking will come and go, too. At the moment it’s coming, and when it goes we’ll think of something else.

    In her post, Markusen refers to the pressure from the “federal bean counters” to adopt indicators, and I think that’s exactly the point. The ArtPlace indicators are necessary precisely to show that we understand the importance of accountability and assessment in contemporary decision-making. That doesn’t mean that our research and assessment have to be perfect. There’s good reason to believe that investing in art projects will create more enjoyable, vibrant environments to live in, and there’s little to suggest that funding the arts will have negative effects (except through gentrification, which is another story). So let’s give out some grants and have vibrant communities!

    Some of the projects will fail, some will succeed. Some would have succeeded even without an ArtPlace grant, but so what? Those who believe that supporting the “art, art making or artists at the heart of the [creative placemaking] initiative” is worthwhile in its own right will be satisfied even if the vibrancy indicators don’t rise, and there will likely be a few success stories that can be used to convince those who generally oppose arts funding that the money wasn’t entirely wasted. (Well, that’s probably an exaggeration: there will always be naysayers and a fair amount of grumbling.)

    That’s the way it works in other areas of government investment, too. You do your due diligence in selecting projects that are likely to succeed and hope for the best. You keep track of projects to demonstrate how good your decision was if they succeed, and if they fail, you try to learn from your mistakes. In the case of Solyndra, the mistake cost taxpayers $503 million (more than three times the NEA’s total budget for 2012). As Ron Klain, chief of staff to Vice President Biden, said at the time the decision was made to give Solyndra the loan guarantee, “The reality is that if [President Obama] visited 10 such places over the next 10 months, probably a few will be belly-up by election day 2012. But that to me is the reality of saying that we want to help promote cutting edge, new economy industries.”

    I don’t think we should expect our research on creative placemaking to do more than make the argument for funding the programs. I seriously doubt I’ll see the day when it is determined that every 10,000 dollars invested in public art projects causes a 0.024 percent increase in the rate of employment within a half-mile radius of the artwork (or whatever the impact might be). Researchers working on education policy are having a hard enough time showing that things like teachers’ qualifications, the curriculum, and the number of hours spent in school have a significant impact on students’ test scores. And that’s an area where researchers have much larger budgets to work with, much larger data sets, and causal relationships that should be much more straightforward than in creative placemaking.

    In my mind, a balance needs to be struck between giving funders sufficient data to inform and justify their work and diverting too many of the resources that could be used to support projects towards research and assessment. Rather than judging the success of our research by its ability to explain causal relationships between art and community development, we should focus on the efficiency of our research and evaluation processes. How much time and money are we spending on research and assessment and how much is that research improving our funding decisions?

    I’m sure some people have strong opinions about whether Our Town’s indicators are more appropriate than ArtPlace’s, or vice versa, and how these efforts compare to other cultural mapping initiatives. The ability to recreate and independently verify results is crucial in scientific research, so I would not want to dismiss what may appear to be a duplication of effort offhand; however, we have to ask ourselves whether that’s really the type of research we’re doing, and at what point additional investment in research runs into diminishing returns.

    At a recent talk in Chicago, Ian David Moss called for unity among funders and researchers, essentially arguing that if we all combine forces, develop models that are sophisticated enough, and coordinate our data collection efforts, we can really get to the bottom of the role arts grantmaking plays in the formation of vibrant communities. While I agree with his call to the extent that it avoids duplicating efforts, I have my doubts about the outcomes he expects. I also wonder whether it’s advisable to be putting all of our eggs in the creative placemaking basket. As Markusen points out, any single set of indicators will produce winners and losers, and I tend to believe that the field as a whole will be better served by maintaining a diverse portfolio of funding initiatives and research projects.

  6. Posted November 20th, 2012 at 7:57 am | Permalink

    ‘Creative Placemaking’ (I even dislike the term) is the new conservative control of culture. Important philosophical concepts, such as the authenticity that Baudrillard wrote about, have been co-opted by product-oriented words like ‘vibrancy’. Real aesthetic criticism has been replaced by capitalistic tools such as ‘metrics’ and ‘indicators’. Art is valued only if it can be measured against some common denominator, or shown to provide jobs and aid the public and its economy. When has the purpose of art ever been the economy? And who exactly is the public?
    Our NEA budget is little more than what we spend to buy one new fighter jet. One needs only to follow the money trail to see how little of this paltry amount even reaches the hands of artists as support.
    This isn’t about supporting the arts any longer. It’s about formal capitalistic control of culture.

  7. Posted November 22nd, 2012 at 11:11 am | Permalink

    November 22, 2012

    Happy Thanksgiving! Today, I am grateful for Rocco Landesman and the marvelous job he has done in his three-plus years as head of the National Endowment for the Arts. He and his deputy, Joan Shigekawa, have accomplished so much for the arts! Getting out on the hustings to visit American cities and towns, bring out arts leaders, meet with mayors, and shine some local press light on what arts and culture do for communities. Creating the Our Town grants program, evoking excitement and new partnering at the local level, and helping arts leaders move outside the doors of our organizations to partner with local governments and others in creative placemaking. Convening foundation leaders and encouraging them to devote additional resources to creative placemaking, birthing ArtPlace. Beefing up the NEA’s research department under the able leadership of Sunil Iyengar. Knocking on the doors of other Washington agencies to initiate partnerships that expand arts engagement in missions like transportation, housing and urban development, health for our military, and rural development. Chair Landesman has just announced he is stepping down, and I think we can be confident that much of what he and his team have done will continue to bear cultural fruit.

    A few responses to recent blogs, which I read with interest during a week of lively sessions on theatre and its audiences in Lisbon and Coimbra, Portugal.

    I’m grateful for John Carnwath’s long and thoughtful comments that take a long view of creative placemaking and muse on the challenges of research in this new field. I completely agree with his point: “I seriously doubt I’ll see the day when it is determined that every 10,000 dollars invested in public art projects causes a 0.024 percent increase in the rate of employment within a half-mile radius of the artwork (or whatever the impact might be).” And the very good analogy with the challenges in measuring education outcomes. Thank you! And you make a very good point about balancing actual funding of projects with spending money on research and assessment. Especially if the research measures and process are squishy.

    The ambitions of creative placemaking, as I see them, are to be more than “little ripples in the surface of human expression, not deep currents in human psychology or social behavior.” Funding arts organizations to partner with others in place can help (and already is helping) to change deeply held views on the role of arts and culture in our communities. It encourages arts organizations to walk out of their own doors, commit time to partnering, and see that this broad engagement will not only make their environs better but also increase their own visibility and perhaps change their programming to become more inclusive and innovative. Already, all over the US, we are seeing new coalitions forming among arts organizations, public and nonprofit, and other partners—universities and colleges, community developers, economic developers, business organizations, and airports among them. Some of these will be durable, and together they may represent the biggest single change in nonprofit arts thinking and energy since the big ramp-up of NEA, philanthropic, and corporate arts funding in the 1970s.

    I also take issue with Carnwath’s view of economic impact studies. In my view, these have not contributed meaningfully to increased support for the arts. They have not increased audiences; US arts participation fell over the past decade, at least through the most recent NEA Survey of Public Participation in the Arts data we analyzed. Economic impact studies play to a very weak argument: that spending money on the arts will generate additional jobs and income elsewhere in the economy. The problem is, so will spending on anything else. Medicine, science, and engineering don’t have to make these kinds of arguments when seeking funding—they count on the public understanding what they deliver. Arts and culture must make their bids for public and private funding on their own grounds, on the missions they serve.

    The best economic impact studies—e.g. those of William Beyers and colleagues in Seattle—do make a difference when they help local protagonists visualize the intricacy and myriad actors in a cultural economy and when they are done for public sector or other clients with real power to change things, rather than nonprofit arts advocacy groups. Beyers’ Seattle music industry studies (2004, 2008, http://www.seattle.gov/music/impactstudy.htm) were closely tied to the Seattle Mayor’s Office and helped to birth its new Seattle City of Music initiative, showcased in our Creative Placemaking white paper (www.nea.gov/pub/CreativePlacemaking-Executive-Summary.pdf; Full paper: http://www.nea.gov/pub/CreativePlacemaking-Paper.pdf). These continue to bear fruit. This month, the Seattle Chamber of Commerce announced its new City of Music program (http://www.seattlechamber.com/Advocacy/Issues/IssueDetail/City-of-Music-Partnership.aspx?tagsFilter=News:Music). The SeaTac Airport has decided to infuse its spaces with live and recorded local music (http://www.portseattle.org/Sea-Tac/Passenger-Services/Pages/Music.aspx). My portion of the forthcoming and fifth annual Otis Creative Economy Report (due out December 4, 2012) documents this synergy and shows how a Mayor, a researcher, and a young music leader launched this exceptional and still expanding creative placemaking effort, a decade in the making.

    Also, arguably the most stunning infusion of new money for arts and culture at the state and local level over the past few years resulted not from arts impact studies but from an amazing coalition spearheaded by Minnesota Citizens for the Arts Executive Director Sheila Smith, working with wildlife (aka hunting) and environmental groups to design the successful Minnesota Clean Water, Land, and Legacy (aka arts and culture) Constitutional Amendment. Minnesota voters passed the Legacy Amendment in 2008, raising the state’s sales tax by three-eighths of a percent for the subsequent 25 years. People organizing for a yes vote emphasized the quality of arts programming in their communities, not economic impact. Based on current projections, Minnesotans will invest more than $1.2 billion in arts and cultural heritage fund projects and programs over the 25-year life of the tax (http://www.legacy.leg.mn/funds/arts-cultural-heritage-fund). Because the state has long had a decentralized regional arts board structure (which may have helped greatly in the electoral battle), legacy funds are broadly distributed regionally and to many small arts organizations and artists.

    And to the bean-counting argument, I agree that we should evaluate and use whatever techniques work. My point is that we do not need bad indicators. Why not redirect the resources currently going into a behind-closed-doors indicator creation effort (in the case of ArtPlace) toward convenings of experts and participants to share what we are really learning about creative placemaking as the funding works its way through many diverse efforts? Just to take one example, the partnering requirement for NEA’s grants asks groups who haven’t worked together before to do so. How expensive is this for them in terms of time and resources? What modes of partnering work best? Good answers to questions like these can do a lot more for future creative placemaking than the creation of indicators that poorly fit projects and places.

    Thanks to Richard Kooyman for raising concerns about the emphasis on instrumental demands on the arts – that they serve other ends like job creation rather than the intrinsic missions of arts and culture. It’s always worth revisiting the RAND study Gifts of the Muse (http://www.rand.org/pubs/monographs/MG218) on the instrumental/intrinsic divide. I addressed this in the original post when regretting that few if any of the indicators currently under consideration by ArtPlace, the NEA, or WESTAF’s Creative Vitality Index address arts outcomes. I highly recommend the cultural vitality indicators created by Maria Rosario Jackson and her Urban Institute colleagues over the years as worthy of inclusion in creative placemaking evaluation (http://www.urban.org/publications/311392.html). I do want to note, however, that creative placemaking has generated connections among, work for, and improvements in artistic achievement for many artists in many places (Seattle musicians are an outstanding example).

    As to the ArtPlace rebuttal, I’ll save a response to that for another day.

4 Trackbacks

  1. [...] via Fuzzy Concepts, Proxy Data: Why Indicators Won’t Track Creative Placemaking Success | Createquity. [...]

  2. By Fuzzy Concepts, Proxy Data « SCOPE@UC on November 12th, 2012 at 11:56 pm

    [...] Proxy Data:  Why Indicators Won’t Track Creative Placemaking Success”  on the blog Createquity. [...]

  3. [...] competition.”  In short, the concept may seem intuitively logical, but it’s awfully difficult to demonstrate (be sure to check out the really terrific running commentary on CreatEquity where the NEA itself [...]

  4. [...] who co-authored the original paper on Creative Placemaking for the NEA, highlights this problem in an essay that she wrote for arts management hub Create Equity, questioning the movement’s early [...]
