Late fall public arts funding update

DOMESTIC – FEDERAL

The National Endowment for the Arts will soon have a new Chairman. Rocco Landesman announced yesterday that he plans to retire at the end of the year, a decision widely anticipated among arts insiders. Senior Deputy Chairman Joan Shigekawa will serve as acting chair until a successor is named.

The Supreme Court will consider a case involving the first sale doctrine and whether consumers have the right to resell in the United States copyrighted items bought abroad. The New York Times comes out in favor of copyright holders on this one. I’m more skeptical, though – limiting consumers’ rights in this way seems like a recipe for convoluted legal restrictions, and the Grey Lady offers no prescriptions for how the limitations should be put into practice. Meanwhile, the FCC is adopting new regulations pertaining to the use of wireless microphones (relevant for some large theaters and performing arts centers).

What do this month’s election results mean for copyright, anyway? Kal Raustiala and Chris Sprigman speculate that in the short term, we probably won’t see any major (successful) efforts to expand copyright protection, especially with two of Hollywood’s favorite pro-copyright members of Congress, Howard Berman and Mary Bono Mack, losing their elections. In the longer term, we may see Republicans try to pursue a libertarian line on copyright in an effort to drive a wedge between Democrats and younger voters. I’m not sure how well that would work, but there is already evidence of a generational divide among Republicans on this issue, one of the party’s many civil wars. Last week, the conservative Republican Study Committee released and then pulled a memo by 24-year-old Congressional staffer Derek Khanna advocating for meaningful copyright reform. Future of Music Coalition’s Casey Rae has more.

DOMESTIC – STATE AND LOCAL

Denver’s arts community is in an unusual situation: it has $57 million in publicly approved capital construction bond money to spend after a planned renovation of Boettcher Concert Hall fell through. The mayor has submitted a list of recommendations redirecting the cash to nine institutions. Denver’s not the only city with new publicly approved money for the arts: as previously reported here, the city of Portland, OR’s Regional Arts & Culture Council will get $5.7 million a year in new arts funding because of a recently passed ballot measure. Eloise Damrosch explains how it happened.

Obviously inspired by the recent passage of a millage (property tax) to support the Detroit Institute of Arts, arts advocates in Ann Arbor put a similar millage proposal on the ballot for this month’s elections – this time to support public art. The measure would have cost taxpayers $11 per year and raised about $450,000 for public art programs, but failed to garner enough votes to pass on Election Day. Aaron Seagraves, Ann Arbor’s public art administrator who is interviewed in the article, says he doesn’t know why the measure didn’t pass, but I’m guessing the DIA’s $2.5 million advocacy campaign (and presumably the lack of same in Ann Arbor) might have had something to do with it.

INTERNATIONAL

For the second Createquity update in a row, the arts policy news from abroad ranges from inconvenient to tragic. Canada’s oldest arts advocacy group, the Canadian Conference of the Arts, is no more. The organization had received continuous funding from the federal government since 1965, funding that represented about three-quarters of its budget, but Canada’s conservative leaders decided to cut the cord. CCA had at times been critical of government-supported policies, though the decision appears to have been more financial than political. Advocacy is not completely dead in Canada, though – Shannon Litzenberger has a report from the all-volunteer Canadian Arts Coalition’s Arts Day festivities (similar to Americans for the Arts’ Arts Advocacy Day).

In the UK, Arts Council England is reducing its staff by 21% as part of a previously negotiated agreement with the government, and Creative Scotland is adjusting its grantmaking procedures in the face of artist concerns. Folks don’t seem to be happy with the general direction of things. In Newcastle, the local government has eliminated all arts funding as part of a whopping £90 million in budget savings, with 1,300 jobs lost in the process – similarly drastic cuts had already been made in Somerset, Derby, and Darlington.

A very sad story comes from Bosnia-Herzegovina, where the 124-year-old National Museum is closing due to lack of funds. The museum, in Sarajevo, is the victim of a political crisis that is destabilizing the government of the young country. Several other revered cultural institutions are in danger of failing as well. Things are a little better in Greece, where Ira-Ilana Papadopoulou reports on how the economic crisis there has affected the cultural sector.

Even sadder news continues to come from the Muslim world, where reports of arts-related tragedies attributable to conflict or oppression continue to pour in. In Syria, the ancient city of Aleppo is losing its cultural heritage in the wake of the civil war taking place there. In Mali, where the militant group Ansar Dine has already destroyed mosques in the ancient city of Timbuktu, Salafists are now threatening and harassing musicians. In Somalia, a popular poet, playwright and songwriter was assassinated in Mogadishu by the Al Qaeda-aligned group Al-Shabab. Astonishingly, he is the 18th media figure killed by the group this year. Iran, thankfully, is not beset with this kind of violence, but its national orchestra is virtually dead in the face of financial difficulties and political pressure.


Cool jobs of the month

Program Officer, Brooklyn Community Foundation

Brooklyn Community Foundation seeks a Program Officer to be responsible for two of its key field-of-interest funds: Arts for All and Caring Neighbors. The Program Officer will be responsible for all aspects of grantmaking in these areas, including initial review and research, site visits, internal presentations and post-award evaluations. The Program Officer will report to the Foundation’s Senior Program Director, and work closely with the President, Chief Operating Officer and other members of the Foundation staff.

No deadline.

Communications Director, The Surdna Foundation

In this newly created role, the Communications Director will develop and implement communications strategies aimed at amplifying and advancing the Foundation’s funding priorities and social change ambitions. S/he will increase Surdna’s communications capacity, thereby enabling the Foundation to better inform, engage and influence priority audiences and to use its voice more strategically. The Director will report to the Senior Director for Programs and Strategies and will work closely with the President, Program Directors, Office of Grants Management and Systems, Program Officers, and Program Associates.

No deadline.

Research and Evaluation Manager, Los Angeles County Arts Commission

The Los Angeles County Arts Commission seeks a full-time Research and Evaluation Manager to provide in-house expertise on program research, needs assessment, evaluation design and methodologies, program monitoring, and outcome measurement. This position will work alongside the Arts Commission’s programs, which include $4.1 million in annual support to over 350 small, midsize and large arts organizations; professional development for arts organizations; paid summer internships in nonprofit arts organizations for undergraduate students; leadership for the implementation of Arts for All, the County’s strategic plan to restore public arts education; programming the Ford Amphitheatre, a 1,240-seat historic outdoor amphitheatre; producing a live annual Holiday Celebration, also broadcast on KCET; and integrating the work of artists into the planning, design and construction of Los Angeles County infrastructure and facilities through the Civic Art Program.

No deadline, but applications will be reviewed starting December 1. The salary range is $70,000-$92,000.

Also, the Technology in the Arts blog just posted this great list of arts job listing websites.


Around the horn: Four more years edition

ART AND THE GOVERNMENT

  • As you know, there was an election last week, and Barack Obama won it. Thankfully this means that Barry Hessenius’s worst fears about the NEA likely won’t be realized, but Barry does have some useful advocacy advice that is worth a read regardless of the outcome. Ted Johnson has a helpful pre-election analysis of election issues relevant to Hollywood. Americans for the Arts has been active too: Jay Dick offers a post-election advocacy to-do list, and the Arts Action Fund offers a thorough roundup of the election results and their implications. Lesser-known developments include the fact that many moderate Republican legislators in Kansas who stood up for arts funding in that state lost their primaries to more conservative challengers; similarly, several pro-arts Republicans in Congress have either retired or lost their seats, further polarizing the parties in their orientation toward arts funding. On the plus side, two cities – Portland, OR and Austin, TX – passed pro-arts ballot measures.
  • The final version of the Chicago Cultural Plan has been released, with a new arts education plan for Chicago Public Schools to boot.

ALL ABOUT THE BENJAMINS

IN THE FIELD

  • Orchestral musician labor disputes are in the news again, and nowhere is the hotbed hotter than in freezing Minnesota, where both the Minnesota Orchestra and the St. Paul Chamber Orchestra face work stoppages. Eric Nilsson says neither side is fully accepting reality, and even the Minneapolis City Council is getting involved. Both groups have canceled performances through the end of 2012, and musicians are starting to look for jobs elsewhere. Meanwhile, the Spokane (WA) Symphony is on strike and canceling performances.
  • I’m intrigued by this announcement of SphinxCon, a new diversity summit organized by Sphinx, a Detroit-based organization dedicated to cultivating more musicians of color in classical music. Aaron Dworkin and company have managed to pull together a pretty incredible speaker list pairing (mostly white) arts service organization leaders with a largely non-white group of artists, academics, and other perspectives. Who knows if it’ll lead to anything, but it seems like the ingredients for a real conversation are there.

BIG IDEAS

  • I can’t emphasize enough how important this 282-word blog post from Adam Thurman is. Adam has a gift for concision, and his three-part distinction between making art, making money doing art, and making a living from art is essential for artists and policymakers alike. And speaking of Adam’s genius, this post on arts marketing (featuring the memorable quotes, “[Y]ou are probably ok with whatever you did last night.  Maybe you watched TV, maybe you read a book, maybe you got drunk and did lines of cocaine.  Whatever you did, you were ok with it.” and “The reality is that if these [new] audiences never come your way they will be fine.  You, on the other hand, will be in serious trouble.”) is well worth a read too.
  • Stephanie N. Stallings thinks jazz could use some binders full of women and speculates that hip-hop has overtaken it as America’s greatest cultural diplomacy tool.
  • Over at Next American City, Neeraj Mehta considers the “who” of creative placemaking (as in, “who benefits?”).
  • So Google’s getting into the virtual museum business now?
  • Online higher education banned in Minnesota, then reinstated.
  • Chad Bauman writes eloquently on the symbiosis between an arts community and its local newspaper – and what it means that so many of those newspapers seem to be hanging on by a thread.
  • Eric Booth submits a lengthy dispatch from the first international Teaching Artist Conference in Oslo.

RESEARCH CORNER

  • A new report from Emerson College’s Center for the Theater Commons, authored by Diane Ragsdale, examines the relationship between nonprofit and commercial theater.
  • Chorus America has released its Choral Operations Survey Report for 2012.
  • I’m looking forward to seeing the results of what looks like a very strong study being undertaken by the LA Philharmonic, USC, and Heart of Los Angeles to investigate the impact of early childhood music training. Meanwhile, a just-released report from Carnegie Hall and WolfBrown examines the potential for music to make a difference in the juvenile justice system.
  • If you’ve ever doubted my claim that logic models matter, check out this analysis of the difficulties faced by One Laptop Per Child, a hugely ambitious, billion-dollar initiative to develop and distribute low-cost laptops to schoolchildren in developing countries. The passage below is an eloquent depiction of how failing to think through the details of a strategy can mean its doom:

    Doing an end-run around lousy infrastructure and poorly-trained teachers might actually work with the right support to guide the child’s learning. Unfortunately, Negroponte has also stated that you actually can give a kid a laptop and walk away.

    According to Jeff Patzer, a former OLPC intern, that’s precisely what they did in Peru. Hardware degraded faster than expected, and OLPC allowed Peru to build its own branch of the system software that was incompatible with patches. Interns were not prepared to educate teachers, and teachers were not prepared to use the XO to teach students.

    “The only thing that happens is the laptops get opened, turned on, kids and teachers get frustrated by hardware and software bugs, don’t understand what to do, and promptly box them up to put back in the corner,” Patzer explained.

ETC.

  • Joe Queenan on having read more than 6000 books. My favorite part of this column is the fact that, because it’s in the Wall Street Journal, his offhand mention of Williams Sonoma is accompanied by its latest stock quote.

Fuzzy Concepts, Proxy Data: Why Indicators Won’t Track Creative Placemaking Success

“There is nothing worse than a sharp image of a fuzzy concept.” -Ansel Adams

(If you don’t know the name Ann Markusen, you should. As professor and director of the Project on Regional and Industrial Economics at the University of Minnesota Humphrey School of Public Affairs, Ann has become one of the most respected and senior voices in the arts research community over the past decade. Among her best-known recent efforts was her authorship, with Anne Gadwa Nicodemus, of the original Creative Placemaking white paper published by the NEA prior to the creation of the Our Town grant program and ArtPlace funder collaborative. So when she approached me to offer a guest post on evaluation challenges for creative placemaking, building on previous coverage of the topic here at Createquity, I could hardly say no. I hope you enjoy Ann’s piece and I look forward to the vigorous discussion it will no doubt spark. -IDM)

*

Creative placemaking is electrifying communities large and small around the country. Mayors, public agencies and arts organizations are finding each other and committing to new initiatives. That’s a wonderful thing, whether or not their proposals are funded by national initiatives such as the National Endowment for the Arts’s Our Town program or ArtPlace.

It’s important to learn from and improve our practices on this new and promising terrain. But efforts based on fuzzy concepts and on indicators designed to rely on data external to the funded projects are bound to disappoint. Our evaluative systems must nurture rather than discourage the marvelous movement of arts organizations, artists and arts funders out of their bunkers and into our neighborhoods as leaders, animators, and above all, exhibitors of the value of arts and culture.

In our 2010 Creative Placemaking white paper for the NEA, Anne Gadwa Nicodemus and I characterize creative placemaking as a process where “partners… shape the physical and social character of a neighborhood, town, city, or region around arts and cultural activities.” A prominent ambition, we wrote, is to “bring diverse people together to celebrate, inspire, and be inspired.”  Creative placemaking also “animates public and private spaces, rejuvenates structures and streetscapes, (and) improves local business viability and public safety,” but arts and culture are at its core. This definition suggests a number of distinctive arenas of experimentation, where the gifts of the arts are devoted to community liveliness and collaborative problem-solving and where new people participate in the arts and share their cultures.

And, indeed, Our Town and ArtPlace encourage precisely this experimental ferment. Like the case studies in Creative Placemaking, each funded project is unique in its artistic disciplines, scale, problems addressed and aspirations for its particular place. Thus, a good evaluation system will monitor the progress of each project team towards its stated goals, including revisions made along the way. NEA’s Our Town asks grant-seekers to describe how they intend to evaluate their work, and ArtPlace requires a monthly blog entry. But rather than more formally evaluate each project’s progress over time, both funders have developed and are compiling place-specific measures based on external data sources that they will use to gauge success: the Arts and Livability Indicators  in the case of the NEA, and what ArtPlace is calling its Vibrancy Indicators.

Creative placemaking funders are optimistic about these efforts and their usefulness. “Over the next year or two,” wrote Jason Schupbach, NEA’s Director of Design, last May, “we will build out this system and publish it through a website so that anyone who wants to track a project’s progress in these areas (improved local community of artists and arts organizations, increased community attachment, improved quality of life, invigorated local economies) will be able to do so, whether it is NEA-funded or not. They can simply enter the time and geography parameters relevant to their project and see for themselves.”

Over the past two years, I have been consulting with creative placemaking leaders and giving talks to audiences in many cities and towns across the country and abroad. Increasingly, I am hearing distress on the part of creative placemaking practitioners about the indicator initiatives of the National Endowment for the Arts and ArtPlace. At the annual meetings of the National Alliance for Media Arts and Culture last month, my fellow Creative Placemaking panel members, all involved in one or more ArtPlace- or Our Town-funded projects, expressed considerable anxiety and confusion about these indicators and how they are being constructed. In particular, many current grantee teams with whom I’ve spoken are baffled by the one-measure-fits-all nature of the indicators, especially in the absence of formal and case-tailored evaluation.

I’ll confess I’m an evidence gal. I fervently believe in numbers where they are a good measure of outcomes; in secondary data like the Census and the National Center for Charitable Statistics where they are up to the task; in surveys where no such data exist; in case studies to illuminate the context, process, and the impacts people tangibly experience; and in interviews to find out how actors make decisions and view their own performance. My own work over the past decade is riddled with examples of these practices, including appendices intended to make the methodology and data used as transparent as possible.

So I embrace the project of evaluation, but am skeptical of relying on indicators for this purpose. In pursuing a more effective course, we can learn a lot from private sector venture capital practices, the ways that foundations conduct grantee evaluations, and, for political pitfalls, defense conversion placemaking experiments of the 1990s.

 

Learning from Venture Capital and Philanthropy

How do private sector venture capital (VC) firms evaluate the enterprises they invest in? Although they target rates of return in the longer run, they do not resort to indicators based on secondary data to evaluate progress. They closely monitor their investees—small firms that often have little business experience, just as many creative placemaking teams are new to their terrain. VC firms play an active role in guiding youthful companies, giving them feedback germane to their product or service goals. They help managers evaluate their progress and bring in special expertise where needed.

Venture capital firms are patient, understanding realistic timelines. The rule of thumb is that they commit to five to seven years, though it may be less or more. Among our Creative Placemaking cases, few efforts succeeded in five years, while some took ten to fifteen years.

VC firms know that some efforts will fail. They are attentive to learning from such failures and sharing what they learn in generic form with the larger business community. Both ArtPlace and the NEA have stated their desire to learn from success and failure. Yet generic indicators, their chosen evaluation tools, are neither patient nor tailored to specific project ambitions. Current Our Town and ArtPlace grant recipients worry that the 1-2 years of funding they’re getting won’t be enough to carry projects through to success or establish enough local momentum to be self-sustaining. Neither ArtPlace nor Our Town has a realistic exit strategy in place for its investments, other than “the grant period’s over, good luck!”

Hands-on guidance is not foreign to nonprofit philanthropies funding the arts.  Many arts program officers act as informal consultants and mentors to young struggling arts organizations and to mature ones facing new challenges. My study with Amanda Johnson of Artists’ Centers shows how Minnesota funders have played such roles for decades. They ask established arts executive directors to mentor new start-ups, a process that the latter praised highly as crucial to their success. The Irvine and Hewlett Foundations are currently funding California nonprofit intermediaries to help small, folk and ethnic organizations use grant monies wisely. They also pay for intermediaries across sectors (arts and culture, health, community development and so on) to meet together to learn what works best.

The NEA has hosted three webinars at which Our Town panelists talk about what they see as effective projects and proposals, a step in this direction. But these discussions are far from a systematic gathering and collating of experience from all grantees in ways that would help cohorts learn from one another and connect those facing similar challenges.

 

The Indicator Impetus

Why are the major funders of creative placemaking staking so much on indicators rather than evaluating projects on their own aspirations and steps forward? Pressure from the Office of Management and Budget, the federal bean-counters, is one factor. In January of 2011, President Obama signed into law the GPRA Modernization Act, updating the original 1993 Government Performance and Results Act, and a new August 2012 Circular A-11 heavily emphasizes the use of performance indicators for all agencies and their programs.

As a veteran of research and policy work on scientific and engineering occupations and on industrial sectors like steel and the military industrial complex, I fear that others will perceive indicator mania as a sign of field weakness. To Ian David Moss’s provocative title “Creative Placemaking has an Outcomes Problem,” I’d reply that we’re in good company. Huge agencies of the federal government, like the National Science Foundation, the National Institutes of Health and NASA, fund experiments and exploratory development without asking that results be held up to some set of external indicators not closely related to their missions. They accept slow progress and even failure, as in cancer research or nuclear fusion, because the end goal is worthy and because we learn from failure. Evaluation by external generic indicators fails to acknowledge the experimental and ground-breaking nature of these creative-placemaking initiatives and misses an opportunity to bolster understanding of how arts and cultural missions create public value.

 

Why Indicators Will Disappoint I: Definitional Challenges

Many of the indicators charted in ArtPlace, NEA Our Town, and other exercises (e.g. WESTAF’s Creative Vitality Index) bear a tenuous relationship to the complex fabric of communities or specific creative placemaking initiatives. Terms like “vitality,” “vibrancy,” and “livability” are great examples of fuzzy concepts, a notion that I used a decade ago to critique planners’ and geographers’ infatuation with concepts like “world cities” and “flexible specialization.” A fuzzy concept is one that means different things to different people, but flourishes precisely because of its imprecision. It leaves one open to trenchant critiques, as in Thomas Frank’s recent pillorying of the notion of vibrancy.

Take livability, for instance, prominent in the NEA’s indicators project. One person’s quality of life can be inimical to others’. Take the young live music scene in cities: youth magnet, older resident nightmare. Probably no concept, however worthy (and quality of life is worthy), has been the subject of so many disappointing and conflicting measurement exercises.

Just what does vibrancy mean? Let’s try to unpack the term. ArtPlace’s definition: “we define vibrancy as places with an unusual scale and intensity of specific kinds of human interaction.” Pretty vague, and… vibrancy is places? Unusual scale? Scale meaning extensive, intensive? Of specific kinds? What kinds? This definition is followed by: “While we are not able to measure vibrancy directly, we believe that the measures we are assembling, taken together, will provide useful insights into the nature and location of especially vibrant places within communities.” If I were running a college or community discussion session on this, I would put the terms “vibrancy, places, communities, measures,” and so on up on the board (so to speak), and we would undoubtedly have a spirited and inconclusive debate!

And what is the purpose of measuring vibrancy? Again from the same ArtPlace LOI: “…the purpose of our vibrancy metrics is not to pronounce some projects ‘successes’ and other projects ‘failures’ but rather to learn more about the characteristics of the projects and community context in which they take place which leads to or at least seems associated with improved places.” Even though the above description mentions “characteristics of the projects,” it’s notable that their published vibrancy indicators only measure features of place.

In fact, many of the ArtPlace and NEA indicators are roughly designed and sometimes in conflict. While giving the nod to “thriving in place,” ArtPlace emphasizes the desirability of visitors in its vibrancy definition (meaning outsiders to the community); by contrast, the NEA prioritizes social cohesion and community attachment, attributes scarce in the ArtPlace definitions. For instance, ArtPlace proposes to use the employment ratio, “the number of employed residents living in a particular geography (Census Block) and dividing that number by the working age persons living on that same block,” as a measure of people-vibrancy. The rationale: “vibrant neighborhoods have a high fraction of their residents of working age who are employed.” Think of the large areas of new, non-mixed-use, upscale high-rise condos whose mostly young professional residents commute daily to jobs and nightly to bars and cafes outside the neighborhood. Not vibrant at all. But such areas would rank high using this measure.
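To make the problem concrete, here is a toy calculation of the employment ratio as defined above (the block figures are hypothetical, invented purely for illustration):

    # Toy illustration of the employment ratio described above:
    # employed residents divided by working-age residents, per block.
    # Both blocks are hypothetical; the point is that a commuter
    # dormitory can outscore a genuinely lively mixed-use neighborhood.
    blocks = {
        "upscale condo block (residents commute out)": (950, 1000),
        "mixed-use arts district (students, retirees, artists)": (600, 1000),
    }

    for name, (employed, working_age) in blocks.items():
        print(f"{name}: employment ratio = {employed / working_age:.2f}")

    # The condo block scores 0.95 and the arts district 0.60, even
    # though the measure says nothing about street life in either place.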

ArtPlace links vibrancy with diversity, defined as heterogeneity of people by income, race and ethnicity. They propose “the racial and ethnic diversity index” (composition not made explicit) and “the mixed-income, middle income index” (ditto) to capture diversity. But what about age diversity? Shouldn’t we want intergenerational activity and encounters too? It is also problematic to prioritize the dilution of ethnicity in large enclaves of recent immigrant groups. Would a thriving, heavily Vietnamese city or suburb be considered non-vibrant because its residents, facing discrimination in other housing markets, choose to live and build their cultural institutions there? Would an ethnic neighborhood experiencing white hipster incursions be evaluated positively despite a decline in its minority population as lower-income people are forced out?

Many of the NEA’s indicators are similarly fuzzy. As an indicator of impact on art communities and artists, its August 2012 RFP proposes median earnings for residents employed in entertainment-related industries (arts, design, entertainment, sports, and media occupations). But a very large number of people in these occupations are in sports and media fields, not the arts. The measure does not include artists who live outside the area but work there. And many artists self-report their industry as other than the one listed above, e.g. musicians work in the restaurant sector, and graphic artists work in motion pictures, publishing and so on. ArtPlace is proposing to use very similar indicators—creative industry jobs and workers in creative occupations—as measures of vibrancy.

It is troubling that neither indicator-building effort has so far demonstrated a willingness to digest and share publicly the rich, accessible, and cautionary published research that tackles many of these definitions. See for instance “Defining the Creative Economy: Industry and Occupational Approaches,” the joint effort by researchers Doug DeNatale and Greg Wassall from the New England Creative Economy Project, Randy Cohen of Americans for the Arts, and me at the Arts Economy Initiative to unpack the definitional and data challenges for measuring arts-related jobs and industries in Economic Development Quarterly.

Hopefully, we can have an engaging debate about these notions before indices are cranked out and disseminated. Heartening signs: in its August RFP, the NEA backtracks from its original plan, unveiled in a spring 2012 webinar, to contract for wholesale construction of a given set of indicators to be distributed to grantees. Instead, it is now contracting for the testing of indicator suitability by conducting twenty case studies. And just last week, the NEA issued a new RFP for developing a virtual storybook to document community outcomes, lessons learned and experiences associated with their creative placemaking projects.

 

Why Indicators Will Disappoint II: Dearth of Good Data

If definitional problems aren’t troubling enough, think about the sheer inadequacy of data sources available for creating place-specific indicators.

For more than a half-century, planning and economic development scholars have been studying places and policy interventions to judge success or failure. Yet when Anne Gadwa Nicodemus went in search of research results on decades of public housing interventions, assuming she could build on these for her evaluation of Artspace Projects’ artist live/work and studio buildings, she found that they don’t really exist.

Here are five serious operational problems confronting creative placemaking indicator construction. First, the dimensions to be measured are hard to pin down. Some of the variables proposed are quite problematic—they don’t capture universal values for all people in the community.

Take ArtPlace’s cell phone activity indicator, for instance, which will be used on nights and weekends to map where people congregate. Are places with cell activity to be judged as more successful at creative placemaking? Cell phone usage is heavily correlated with age, income and ethnicity. The older you are, the less likely you are to have a cell phone or use it much, and the more likely you are to rely on land-lines, which many young people do without. At the November 2012 Association of Collegiate Schools of Planning annual meetings, Brettany Shannon of the University of Southern California presented research results from a survey of 460 LA bus riders showing low cell phone usage rates among the elderly, particularly Latinos. Among those aged 18-30, only 9% of English speakers and 15% of Spanish speakers had no cell phone, compared with 29% of English speakers and 54% of Spanish speakers over age 50. A cell phone activity measure is also likely to completely miss people attending jazz or classical music concerts, dramas, and religious cultural events where cell phones are turned off. And what about all those older folks who prefer to sit in coffee shops and talk to each other during the day, play leadership roles in the community through face-to-face work, or meet and engage in arts and cultural activities around religious venues? Aren’t they congregating, too?

Or take home ownership and home values, an indicator the NEA hopes to use. Hmmm… home ownership rates—and values—in the US have been falling, in large part due to overselling of homes during the housing bubble. Renting is just as respectable an option for place lovers, especially young people, retirees, and lower-income people in general. Why would we want grantees to aspire to raise homeownership rates in their neighborhoods, especially given gentrification concerns? Home ownership does not insulate you against displacement, because as property values rise, property taxes do as well, driving out renters and homeowners alike on fixed or lower incomes. ArtPlace is developing “measures of value, which capture changes in rental and ownership values…” This reads like an invitation to gentrification, and runs contrary to the NEA’s aspirations for creative placemaking to support social cohesion and community attachment.

Second, most good secondary data series are not available at spatial scales corresponding to grantees’ target places. ArtPlace’s vibrancy exercise aspires to compare neighborhoods with other neighborhoods, but available data make this task almost impossible to accomplish at highly localized scales. Some data points, like arts employment by industry, are available only down to the county level and only for more heavily populated counties because of suppression problems (and because they are lumped together with sports and media in some data sets). Good data on artists from the Census (Public Use Microdata Sample) and American Community Survey, the only databases that include the self-employed and unemployed, can’t be broken down below PUMAs (Public Use Microdata Areas) of 100,000 people, which bear little relationship to real neighborhoods or city districts (see Crossover, where we mapped artists using 2000 PUMS data for the Los Angeles and Bay Area metros).

Plus, many creative placemaking efforts have ambitions to have an impact at multiple scales. Gadwa Nicodemus’s pioneering research studies, How Artist Space Matters and How Art Spaces Matter II, looked in hindsight at Artspace’s artist live/work and mixed use projects, where the criteria for success varied widely between projects and for the various stakeholders involved in each. Artists, nonprofit arts organizations, and commercial enterprises (e.g. cafes) in the buildings variously hoped that the project would have an impact on the regional arts community, neighborhood commercial activity and crime rates, and local property values. The research methods included surveys and interviews exploring whether the goals of the projects had been achieved in the experience of target users. Other methods involve complex secondary data manipulation to come up with indicators that are a good fit. Gadwa Nicodemus’s studies demonstrate how much work it is to document real impact along several dimensions, at multiple spatial scales, and over long enough time periods to ensure a decent test. Her indicators, such as hedonic price indices to gauge area property value change, are sophisticated, but also very time- and skill-intensive to construct.

Third, even if you find data that address what you hope to achieve, they are unlikely to be statistically significant at the scales you hope for. In our work with PUMS data from the 2000 Census, a very reliable 5% sample, we found we could not make reliable estimates of artist populations at anything near a neighborhood scale. To map the location of artists in Minneapolis, we had to carve the city into three segments based on PUMA lines, and even then, we were pushing the statistical reliability hard (Artists’ Centers, Figure 3, p. 108).

Some researchers are beginning to use the American Community Survey, a 1% sample much smaller than the decennial Census PUMS 5%, to build local indicators, heedless of this statistical reliability challenge. ArtPlace, for instance, is proposing to use ACS data to capture workers in creative occupations at the Census Tract level. See the statistical appendix to Leveraging Investments in Creativity (LINC)’s Creative Communities Artist Data User Guide  for a detailed explanation of this problem. Adding the ACS up over five years, one way of improving reliability, is problematic if you are trying to show change over a short period of time, which the creative placemaking indicators presumably aspire to do.
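For intuition about why a 1% sample breaks down at neighborhood scale, consider a back-of-the-envelope standard error calculation (a sketch of my own with hypothetical numbers, assuming simple random sampling rather than the ACS’s actual design):

    # Rough illustration: the standard error of an estimated share
    # scales with 1/sqrt(n), so small local samples yield estimates
    # whose margins of error swamp the quantity being measured.
    # Figures are hypothetical; the ACS's complex survey design is
    # ignored here (accounting for it would make reliability worse).
    import math

    def proportion_se(p, n):
        """Approximate standard error of a proportion estimated from
        a simple random sample of size n."""
        return math.sqrt(p * (1 - p) / n)

    tract_population = 4000   # a typical census tract
    artist_share = 0.01       # suppose 1% of residents are artists

    for label, rate in [("5% sample (decennial PUMS)", 0.05),
                        ("1% sample (single-year ACS)", 0.01)]:
        n = int(tract_population * rate)
        margin = 1.96 * proportion_se(artist_share, n)
        print(f"{label}: n = {n}, estimate = 1.0% +/- {margin:.1%}")

    # The 1% sample gives roughly 1.0% +/- 3.1% -- a confidence
    # interval about three times wider than the estimate itself.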

Fourth, charting change over time successfully is a huge challenge. ArtPlace intends to “assess the level of vibrancy of different areas within communities, and importantly, to measure changes in vibrancy over time in the communities where ArtPlace invests.” How can we expect projects that hope to change the culture, participation, physical environment and local economy to show anything in a period of one, two, three years? More ephemeral interventions may only have hard-to-measure impacts in the year that they happen, even if they catalyze spinoff activities, while the potentially clearer impact of brick-and-mortar projects may take years to materialize.

We know from our case studies and from decades of urban planning and design experience that changes in place take long periods of time. For example, Cleveland’s Gordon Square Arts District, a case study in Creative Placemaking, required at least five years for vision and conversations to translate into a feasibility study, another few years to build the streetscape and renovate the two existing shuttered theatres, and more to build the new one.

Because it’s unlikely that the data will be good enough to chart creative placemaking projects’ progress over time, we are likely to see indicators used in a very different and pernicious way – to compare places with each other in the current time period. But every creative placemaking initiative is very, very different from the others, and their current rankings on these measures are more apt to reflect long-time neighborhood evolution and particularities than the impact of their current activities. I can just see creative placemakers viewing such comparisons and throwing their hands up in the air, shouting, “but… but… but, our circumstances are not comparable!”

One final indicator challenge. As far as I can tell, there are very few arts and cultural indicators included among the measures under consideration. Where is the mission of bringing diverse people together to celebrate, inspire, and be inspired? Shouldn’t creative placemaking advance the intrinsic values and impact of the arts? Heightened and broadened arts participation? Preserving cultural traditions? Better quality art offerings? Providing beauty, expression, and critical perspectives on our society? Are artists and arts organizations whose greatest talents lie in the arts world to be judged only on their impact outside of this core? Though arts participation is measurable, many of these “intrinsic” outcomes are challenging data-wise, just as are many of the “instrumental” outcomes given central place in current indicator efforts. WolfBrown now offers a website that aims to “change the conversation about the benefits of arts participation, disseminate up-to-date information on emerging practices in impact assessment, and encourage cultural organizations to embrace impact assessment as standard operating practice.”

 

The Political Dangers of Relying on Indicators

I fear three kinds of negative political responses to reliance on poorly-defined and operationalized indicators.  First, it could be off-putting to grantees and would-be grantees, including mayors, arts organizations, community development organizations and the many other partners to these projects. It could be baffling, even angering, to be served up a book of cooked indicators with very little fit to one’s project and aspirations and to be asked to make sense out of them. The NEA’s recent RFP calls for the development of a user guide with some examples, which will help. Those who have expressed concern report hearing back something like “don’t worry about it – we’re not going to hold you to any particular performance on these. They are just informational for you.” Well, but then why invest in these indicators if they aren’t going to be used for evaluation after all?!

Second, creative placemaking grants are awarded competitively, which means they generate losers as well as winners. Some who aren’t funded the first time try again, and some are sanguine and grateful that they were prompted to make the effort and form a team. But some will give up. There are interesting parallels with place-based innovations in the 1990s. The Clinton administration’s post-Cold War defense conversion initiatives included the Technology Reinvestment Project, in which regional consortia competed for funds to take local military technologies into the civilian realm. As Michael Oden, Greg Bischak and Chris Evans-Klock concluded in our 1995 Rutgers study (full report available from the authors on request), the TRP failed after just a few years because Members of Congress heard from too many disgruntled constituents. In contrast, the Manufacturing Extension Partnership, begun in the same period and administered by NIST, has survived because after its first exploratory rounds, it partnered with state governments to amplify funding for technical assistance to defense contractors struggling with defense budget implosion everywhere. States, rather than projects, then competed, eager for the federal funds.

Third, and most troubling, funders may begin favoring grants to places that already look good on the indicators. Anne Gadwa Nicodemus raised this in her GIA Reader article on creative placemaking last spring. ArtPlace’s own funding criteria suggest this: “ArtPlace will favor investments… and sees its role as providing venture funding in the form of grants, seeding entrepreneurial projects that lead through the arts and already enjoy strong local buy-in and will occur at places already showing signs of momentum….” Imagine how a proposal to convert an old school in a very low-income, somewhat depopulated minority neighborhood into an artist live/work, studio, performance and learning space would stack up against a proposal to add funding to a new outreach initiative in an area already colonized by young people from elsewhere in the same city. A funder might be tempted to fund the latter, where vibrancy is already indicated, over the former, where the payoff might be much greater but farther down the road.

 

In an Ideal World, Sophisticated Models

In any particular place, changes in the proposed indicators will not be attributable to the creative placemaking intervention alone. So imagine the distress of a grantee whose indicators are moving the wrong way, placing its project poorly in comparison to others. Area property values may be falling because an environmentally obnoxious plant starts up. Other projects might look great on indicators not because of their initiatives, but because another intervention, like a new light rail system or a new community-based school, dramatically changes the neighborhood.

What we would love to have, but don’t at this point, are sophisticated causal models of creative placemaking. The models would identify the multiple actors in the target place and take into account the results of their separate actions. A funded creative placemaking project team would be just one such “actor” among several (e.g. real estate developers, private sector employers, resident associations, community development nonprofits and so on).

A good model would account for other non-arts forces at work that will interact with the various actors’ initiatives and choices. This is crucial, and the logic models proposed by Moss, Zabel and others don’t do it. Scholars of urban planning well know how tricky it is to isolate the impact of a particular intervention when there are so many others occurring simultaneously (crime prevention, community development, social services, infrastructure investments like light rail or street repaving).

Furthermore, models should be longitudinal, i.e., they should chart progress in the particular place over time, rather than comparing one place cross-sectionally with others that are quite unlikely to share the same actors, features and circumstances. If we create models that are causal, acknowledge other forces at work, and are applied over time, “we’ll be able to clearly document the critical power of arts and culture in healthy community development,” reflects Deborah Cullinan of San Francisco’s Intersection for the Arts in a followup to our NAMAC panel.

Such multivariate models, as social scientists and urban planners call them, lend themselves to careful tests of hypotheses about change. We can ask if a particular action, like the siting of an interstate highway interchange or adding a prison or being funded in a federal program like the Appalachian Regional Commission, produces more employment or higher incomes or better quality of life for its host city or neighborhood when compared with twin or comparable places, as Andrew Isserman and colleagues have done in their “quasi-experimental” work (write me for a summary of these, soon to be published).

We can also run tests to see if differentials in city and regional arts participation rates and presence of arts organizations can be explained by differences in funding, demographics, or features of local economies. My teammates and I used Cultural Data Project and National Center for Charitable Statistics data on nonprofit arts organizations in California to do this for all California cities with more than 20,000 residents. Our results, while cross-sectional, suggest that concerted arts and culture-building by local Californians over time leads to higher arts participation rates and more arts offerings than can be explained by other factors. The point is that techniques like these DO take into account other forces (positive and negative) operating in the place where creative placemaking unfolds.
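For readers curious what such a multivariate test looks like in practice, here is a minimal sketch in the spirit of that California analysis (the file and column names are hypothetical, not those of the actual study):

    # A minimal multivariate sketch: regress city arts participation
    # on arts funding while controlling for demographics and local
    # economic features, so funding's estimated effect is not simply
    # picking up income, education, or city size. Data are assumed.
    import pandas as pd
    import statsmodels.formula.api as smf

    # Assumed: one row per city, with the columns named below.
    df = pd.read_csv("california_cities.csv")

    model = smf.ols(
        "arts_participation_rate ~ arts_funding_per_capita"
        " + median_income + pct_college_educated + population",
        data=df,
    ).fit()
    print(model.summary())

    # If arts_funding_per_capita's coefficient remains positive and
    # significant with the controls included, the association is not
    # explained by those other forces alone -- though cross-sectional
    # data still cannot establish causation.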

 

Charting a Better Path

It’s understandable why the NEA and ArtPlace are turning to indicators. Their budgets for creative placemaking are relatively small, and they’d prefer to spend them on more programming and more places rather than on expensive, careful evaluations.  Nevertheless, designing indicators unrelated to specific funded projects seems a poor way forward. Here are some alternatives.

Commit to real evaluation. This need not be as expensive as it seems. Imagine if the NEA and ArtPlace, instead of contracting to produce one-size-fits-all indicators, were to design a three-stage evaluation process.  Grantees propose staged criteria for success and reflect on them at specified junctures. Funding is awarded on the basis of the appropriateness of this evaluative process and continued on receipt of reflections. Funders use these to give feedback to the grantee and retool their expectations if necessary, and to summarize and redesign overall creative placemaking achievements. This is more or less what many philanthropic foundations do currently and have for many years, the NEA included. Better learning is apt to emerge from this process than from a set of indicator tables and graphics.  ArtPlace is well-positioned to draw on the expertise of its member foundations in this regard.

Build cooperation among grantees to soften the edge of competition for funds. Convene grantees and would-be grantees annually to talk about successes, failures, and problems. Ask successful grantees to share their experience and expertise with others who wish to try similar projects elsewhere. During its ten-year lifespan, Leveraging Investments in Creativity convened its creative community leaders annually and sometimes more often, resulting in tremendous cross-fertilization that boosted success. Often, what was working elsewhere turned out to be a better mission or process than what a local group had planned. Again, ArtPlace in particular could create a forum for this kind of cooperative learning. And, as mentioned, the NEA’s webinars are a step in the right direction. Imagine, notes my NAMAC co-panelist Deborah Cullinan of Intersection for the Arts, if creative placemaking funders invested in cohort learning over time, with enough longevity to build relationships, share lessons, and nurture collaborations.

Finally, the National Endowment for the Arts and ArtPlace could provide technical assistance to creative placemaking grantees, as the Manufacturing Extension Partnership does for small manufacturers. Anne Gadwa Nicodemus and I continually receive phone calls from people across the country psyched to start projects but in need of information and skills on multiple fronts. There are leaders in other communities, and consultants, too, who know how creative placemaking works under diverse circumstances and who can form a loose consortium of talent: people who understand the political framework, the financial challenges, and the way to build partnerships. Artspace Projects, for instance, has recently converted over a quarter century of experience with more than two dozen completed artist and arts-serving projects into a consultancy to help people in more places craft arts-based placemaking projects.

Wouldn’t it be wonderful if, in a few years’ time, we could say, look!  Here is the body of learning and insights we’ve compiled about creative placemaking–how to do it well, where the diverse impacts are, and how they can be documented. With indicators dominating the evaluation process at present, we are unlikely to learn what we could from these young experiments. An indicators-preoccupied evaluation process is likely to leave us disappointed, with spreadsheets and charts made quickly obsolete by changing definitions and data collection procedures. Let’s think through outcomes in a more grounded, holistic way. Let’s continue, and broaden, the conversation!

(The author would like to thank Anne Gadwa Nicodemus, Deborah Cullinan, Ian David Moss, and Jackie Hasa for thorough reads and responses to earlier drafts of this article.)

Note to readers: In addition to the comments below, the National Endowment for the Arts and ArtPlace have now published official responses to this article. Read them here:


Science Doesn’t Have All the Answers: Should We Be Worried?

“a double-blind study,” photograph by Casey Holford

On October 1 the science section of the New York Times ran two articles next to each other. One of them describes a recent study that concluded young children at play display behaviors similar to those of scientists, suggesting scientific inquiry is driven by human instinct. The other refers to the alarming extent to which that human instinct muddies scientific inquiry along the way.

Recently the scientific community has dealt with controversies cascading across many areas of research.  Most of them relate to a phenomenon known as publication bias.  Put simply, publication bias occurs when research journals prioritize studies with thought-provoking—and at the very least statistically significant—results. This makes sense; it’s hard to get excited about studies that don’t show anything conclusive. We crave good stories, stunning breakthroughs, and world-changing discoveries. Such desire has driven scientific (and artistic) innovation throughout history.

The dark underbelly of this lust for meaning, however, is something called “significance chasing.” Researchers know their chances of getting published – and advancing their professional status – hinge on getting statistically significant results.  They have a huge incentive to hunt for and read into anomalies in data – raising the possibility of over-interpreting those anomalies as due to something other than chance. An article in the journal Psychological Science illustrates this point eerily well.  As the authors point out,

It is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields ‘statistical significance,’ and then to report only what ‘worked’… This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result.

To compound the problem, many researchers do not openly share their full data sets or calculation methods, and have few incentives to challenge one another’s findings. The Psychological Science article hammers the point home with a simulated experiment that “shows” listening to a Beatles song makes you younger. That’s hooey, of course, but the authors’ point is that without stricter guidelines around how data sets are reported, nearly any relationship can be presented as statistically significant.
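To see how easily flexible analysis inflates false positives, here is a small simulation of significance chasing (my own sketch, not the Psychological Science authors’ code; all parameters are illustrative):

    # Simulate "significance chasing": even when no true effect exists,
    # trying several analytic alternatives and reporting only the one
    # that "worked" pushes the false-positive rate well past 5%.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    def one_study(n=20, n_outcomes=4):
        """A null experiment: two identical groups, several outcome
        measures. Return the smallest p-value one could report."""
        p_values = []
        for _ in range(n_outcomes):
            group_a = rng.normal(0, 1, n)
            group_b = rng.normal(0, 1, n)  # same distribution: no effect
            _, p = stats.ttest_ind(group_a, group_b)
            p_values.append(p)
        return min(p_values)

    n_studies = 10_000
    hits = sum(one_study() < 0.05 for _ in range(n_studies))
    print(f"'Significant' results under the null: {hits / n_studies:.1%}")

    # With four outcomes to pick from, roughly 1 - 0.95**4 (about 19%)
    # of null studies yield at least one publishable "finding."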

How big of a problem is this? In the medical community it has raised frightening questions about cancer studies that had been the basis for new treatments. It has caused an increase in the number of retractions issued in high-profile scientific journals – and a blog devoted to tracking them. And lest you think this concern is limited to the “hard” sciences, think again – it has already raised discussions of implications in humanitarian aid and in the more mainstream business community (the latter summing things up nicely with a headline, “Why You Can’t Trust Any of the Research You Read”).

Yikes.

The idea that the scientific method is easily mucked up opens up a whole host of mind-bending questions. (What if there’s a publication bias toward studies about publication bias?  Eeek…). It forces us to stop and think about the fledgling world of arts research – a world that has desperately wanted to find good, hard scientific evidence of impact for a long time. Randomized controlled trials, double-blind studies and other sophisticated research methods seemed like a holy grail, promising that if we could cleverly adapt them to meet our needs, we would have indisputable evidence of the importance of the arts, and good, hard data to guide how we direct our resources. In light of these controversies, should we question our desire to be better researchers?

No – but we should learn from others’ mistakes, and take a hard look at institutional issues common across our fields. Many of the problems the scientific community is experiencing aren’t about the tools scientists have at their disposal, but the cultures in which those tools are used. A few months ago the editors of two high-profile medical journals, Drs. Ferric Fang and Arturo Casadevall, put out a call for “structural reforms” to combat a “hypercompetitive” and “insecure” working environment they believe to be at the heart of the issue. The structural flaws they identify include inadequate resources, a “leaky pipeline” of emerging talent, agenda-driven funding and administrative bloat.

Sound familiar?

The long-term implications for all research communities will unfold over time. Many of Fang and Casadevall’s recommendations are similar to those made within our own field: directing more funding toward salary support to increase job stability, streamlining grant application and reporting processes, and examining the strengths and weaknesses of peer grant review. A number of other ideas have been floated that may change established research practices. Creating a “journal of good questions” that decides which studies to publish before their results are known would reward researchers for their curiosity and the strength of their proposed methodology. Limiting the “degrees of freedom” researchers have in gathering additional data when their original data set does not yield anything “interesting” would curb significance chasing and, in theory, create a culture more tolerant of inconclusive results.

Regardless of which, if any, of these ideas stick, we need to acknowledge two things: a) our research is in all likelihood at least as prone to these problems as the “hard sciences,” if not more so, and b) the “best practices” we have been trying to emulate are not “fixed practices.” It’s often said that what arts researchers seek to measure is too squishy to fit into the traditional scientific process. If more and more people are realizing the process has a squish of its own – well then, maybe we don’t need to play “catch up” so much as try new things.

We may even come up with ideas useful to the more “established” fields we have been trying to emulate. The authors of the study in the first (less depressing) New York Times article concluded the preschoolers they observed behaved like scientists because they “form[ed] hypotheses, [ran] experiments, calculat[ed] probabilities and decipher[ed] causal relationships about the world.” I suspect that a group of arts researchers, observing the same group of children, would have interpreted those same behaviors as artistic. Human instinct drives scientific inquiry and artistic inquiry, and muddies both. Artists, one could argue, are a little more used to the mud.


Announcing: Createquity Office Hours

As you can see from my previous post, I get around a lot these days for conferences and the like. Meanwhile, the network of Createquity Writing Fellows past and present is ever growing, and we now have representation in seven cities from coast to coast.

So we’ve decided to try out a new concept here at Createquity: Office Hours. Any time one or more Createquity writers and I are in the same city, we’re going to occupy a bar (or in this first case, a food court) and turn it into Arts Nerd Central. Ever wanted to meet any of us in person? Here’s your chance: come with your questions, ideas, requests for career advice, whatever. It will be a great way for us to get to know some of our readers a little better and, more importantly, for you all to meet each other!

Our inaugural Createquity Office Hours event will be in Chicago on November 14. Aaron Andersen, one of the original Createquity Writing Fellows (and an original all around), will join me for lunch at Foodlife, a fun and creative food court downtown. If this goes well, we’ll start planning others in a city near you!

Createquity Office Hours: Chicago
Wednesday, November 14
12:00-1:30pm
Foodlife at Water Tower Place
835 North Michigan Avenue
Chicago, IL
RSVP here


DC, Chicago and Calgary

(Quick note: Createquity offers condolences to all those affected by Hurricane Sandy. A number of artists and arts organizations were among those affected, and many of them are now facing great challenges. The Chelsea art district and artist enclaves in the Red Hook area of Brooklyn, NY were hit particularly hard, and it seems a safe bet that the damage to the arts community stretches into the millions of dollars. Hyperallergic is doing a great job rounding up damage reports, mostly from the visual arts; included among these is Createquity contributor Katherine Gressel’s employer, Smack Mellon Gallery in DUMBO. Other affected groups include New Amsterdam Presents (an entrepreneurial collective of young musicians previously profiled on Createquity) and the Westbeth artist housing complex, where my aunt and uncle have an apartment. Createquity Writing Fellow Jacquelyn Strycker has a roundup of resources for artists on her personal blog, The Strycker, and there is more info from Thomas Cott, Americans for the Arts, and Grantmakers in the Arts.)

More travel for me coming up this month – I’m on a panel at the Future of Music Coalition Policy Summit on the 13th, giving a workshop on the untapped potential of evaluation in Chicago on the 14th, then speaking at the ArtsSmarts Knowledge Exchange in Calgary on the 16th (my first work trip outside of the United States). Here are the deets:

November 12-13
Future of Music Coalition Policy Summit
New America Foundation
1899 L Street NW, Suite 400
Washington, DC
Info; event is at capacity
(I’ll be participating in a panel called “The Intersection of Data, Policy and the Arts Sector” at 3:55pm on the 13th.)

Wednesday, November 14
“Solving the Underpants Gnomes Problem: Towards an Evidence-Based Arts Policy”
part of the University of Chicago Cultural Policy Center Fall Workshop Series
DCA Storefront Theatre
68 East Randolph Street
Chicago, IL
5 – 6:30pm
Info here (scroll down)
(This is going to be a 90-minute solo workshop with a fair amount of new content, all about the untapped potential of measurement in the arts – what we’re doing wrong, and how we can fix it. I’m excited!)

November 15-16
ArtsSmarts 2012 Knowledge Exchange
University of Calgary Downtown Campus
906 8 Avenue SW
Calgary, Alberta, CANADA
Info and registration
(I’ll be participating in a dialogue on “Cross-Border Conversations on Creative Community Development” with Shawn van Sluys of the Musagetes Foundation, moderated by Stephen Huddart of the J. W. McConnell Family Foundation. The conversation takes place on November 16 from 11am-12:30pm.)


Around the horn: Frankenstorm edition

ART AND THE GOVERNMENT

IN THE FIELD

  • Artists often fail to realize that crowdfunding campaign money isn’t free – in addition to the fees you have to pay Kickstarter or one of its competitors like Indiegogo or RocketHub, the perks offered to donors often cost money as well. This handy web toy from Reuben Pressman helps you think through how much money you really need to raise if you’re thinking about starting a Kickstarter campaign (or really any crowdfunding operation); see the back-of-the-envelope sketch after this list.
  • Still not seeing a ton of post-recession nonprofit mergers, but here’s one in New York City: the Urban Arts Partnership has acquired the operations of the Manhattan New Music Project, which had recently won several large Department of Education grants for arts residencies for special-needs students.
  • Nina Simon takes on public voting for winners in art competitions, noting that only a small percentage of those eligible actually take the time to vote. She sees positive implications for engagement but possibly negative ones for artistic integrity; I see further evidence for the need for a hybrid approach.
  • Typical: just as games (including video games) are being touted as the next big thing in arts circles, in the rest of the world their business model is collapsing.
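About that crowdfunding math, as promised above: here is a back-of-the-envelope version – my own sketch, not Pressman’s tool, and the fee rate is purely illustrative rather than any platform’s actual quote. To net a target amount after fees and perk costs, the public goal has to be grossed up:

    # Hypothetical sketch: find the gross goal G such that
    # G * (1 - fee_rate) - perk_cost = net_needed.
    def crowdfunding_goal(net_needed, fee_rate=0.08, perk_cost=0.0):
        # fee_rate stands in for combined platform + payment-processing fees
        # (the ~8% default is illustrative; check your platform's real rates)
        return (net_needed + perk_cost) / (1 - fee_rate)

    # Netting $5,000 while spending $900 fulfilling donor perks:
    print(f"${crowdfunding_goal(5000, perk_cost=900):,.2f}")  # about $6,413

In other words, the $5,000 you actually need becomes a roughly $6,400 public goal before the campaign even breaks even – the kind of gap Pressman’s tool is designed to expose.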

ALL ABOUT THE BENJAMINS

  • Barry Hessenius has a short interview with Regina Smith, Senior Program Officer for Arts and Culture at the Kresge Foundation and Board Chair of Grantmakers in the Arts.

RESEARCH CORNER

  • Creative placemaking giant ArtPlace has been busy lately. Now accepting applications for its third round of grants (letters of inquiry are due tomorrow, November 1; UPDATE: the deadline has been extended to Monday, November 5), the funding collaborative released a short thought piece detailing thirteen “principles for successful creative placemaking” in late summer. And earlier this month, ArtPlace “soft launched” its vibrancy indicators, a research effort accompanying its two rounds (and counting) of creative placemaking grants. While the indicators aren’t totally done yet – data points covering value creation and racial/economic diversity have yet to be fully defined or published, and a promised website showing vibrancy in various corners of the country has not yet materialized – these two documents provide the most detail available to date on ArtPlace’s efforts to understand and measure creative placemaking. Andrew Taylor and Linda Essig offer initial reviews, and stay tuned to this space for more in-depth analysis from a special guest.
  • The fall issue of the Grantmakers in the Arts Reader has a very interesting feature taking a look back at historical research studies that, in the opinion of guest editor Alexis Frasz, deserve a second look. One of the studies in question is a re-release of 1988’s “Autopsy of an Orchestra: An Analysis of the Factors Contributing to the Bankruptcy of the Oakland Symphony Orchestra Association” by Melanie Beene, Patricia Mitchell, and Fenton Johnson, now available for the first time in digital format. Each study comes with two responses, one from an “established” and one from an “emerging” grantmaker. Other studies (re)considered include Gifts of the Muse (Createquity’s take here), “Art and Culture in Communities: Unpacking Participation,” “Crossover: How Artists Build Careers Across Commercial, Nonprofit and Community Work,” and “Champions of Change: The Impact of the Arts on Learning.”
  • WolfBrown researcher Jennifer Novak-Leonard declares crowdfunding the fourth mode of arts participation (the other three being arts creation/performance, arts engagement through media, and attendance at arts events). Quoth she: “I also suggest that this information [about the relationship between crowdfunding activity and other modes of arts participation] would be valuable to each of the platforms currently helping crowd-funding grow and thrive. This is a shameless pitch to these platforms to engage in dialogue with me about how to get this research effort underway… ideally in a timeframe that would inform and expand the conversations that will begin in 2013 as we begin to see the results from the 2012 [Survey of Public Participation in the Arts].”
  • The Foundation Center’s march toward establishing a data standard for grants continues, with 15 foundations now having signed on to share their grants data publicly via the Glasspockets website. Among the arts supporters participating in the initiative are the Annenberg, Getty, Hewlett, MacArthur, and Rockefeller Foundations.
  • The UK’s Mark Robinson offers his take on the NEA’s new “system map” and research agenda, “How Art Works.”
  • Cool social network visualization here: the Seattle Band Map illustrates connections between musical acts via shared band members or project collaborations.
  • Direct mail advertising campaigns are getting a bad rap, but research shows that they’re surprisingly effective at reaching consumers, says TRG’s Will Lester.
  • William Baumol has a new book out summarizing his decades of thinking on cost disease. Joe Patti has more.
  • Back in 2001, when it started, economists would not have predicted Wikipedia’s success; nor can they really explain it now.
  • Great Q&A with Nate Silver (one of my blog heroes) about his upcoming book on forecasting. A couple of choice quotes:

    Q. When predictions involve human ‘systems’ & behavior (social, economic, political etc) that are by their very nature ‘adaptive’, how do you deal with the tricky “Heisenberg Principle” — like effect where the very act of predicting itself becomes a factor that adds information that alters the system and influences individual and/or collective behavior? -John

    A. This is a gigantic problem. In the book, we discuss how consumers, politicians, and businesses make plans based on economic forecasts that can have a host of problems. We also look at how this manifests in disease modeling. If you accurately forecast a very bad flu, it may cause people to stay home, which is good but cancels your forecast. But, the forecast served its purpose because it made people aware of their circumstances.

    and

    Q. What’s your assessment of economics as a discipline, judged in terms of its ability to make politically useful predictions? For example, can economists predict with any reliability what the economic impact of a tax cut or a government spending program will be? -Dan Schroeder

    A. The view of macroeconomic prediction in the book is pretty harsh. Economists have shown no real ability to predict a recession more than six months out. See the Wall Street Journal panel that predicted there would be no recession in December, 2007. It’s hard to measure the economy. Revisions can be as substantial as 5% in some quarters. Therefore, it is hard to predict and judge what the right policy is and what the implications of any policy are. So, we should be skeptical of anyone who predicts the impact of policy with a high degree of certainty. Humility is key.


Five Years of Createquity

Today, Createquity turns five years old. Huzzah!

Since my last “blogiversary” post two years ago, Createquity has come a long way. Most significant has been the introduction of the Createquity Writing Fellowship program, which has been fantastically successful at diversifying the voice of the site, generating more content for the Arts Policy Library, helping some newer arts professionals get exposure and hone their writing chops, and generally turning Createquity into more of the virtual think tank it was always supposed to be. This year, we’ve also seen a jump in audience as posts (notably “Creative Placemaking Has An Outcomes Problem”) have started to get picked up in more “mainstream” non-arts outlets such as Salon, Slate, and Fast Company, and Nina Simon sending hordes of her admiring fans our way this summer didn’t hurt either. Finally, I was tickled to see that Createquity got its first ever Wikipedia cite this year (and no, neither I nor Aaron had anything to do with it).

Here are a few Createquity facts and figures, for the stat geeks among you:

Number of subscribers: 2,794 (as of today)
Number of posts: 530, including this one
Number of words: 520,320
Number of Createquity Writing Fellows: 8 (and counting)
Most popular post: Creative Placemaking Has an Outcomes Problem, by a long shot, at nearly 10,000 page views (which doesn’t include times it was viewed in a feed reader, email, or on the homepage). Next on the list is Katherine Gressel’s Public Art and the Challenge of Evaluation, which briefly held the top spot before Creative Placemaking overtook it. I’m very proud of the fact that the 2nd-most-read post of all time on Createquity was written by one of our Fellows.
Proof that all y’all have longer attention spans than people give you credit for: When writing for other blogs, I often face pressure to limit my word count for fear of losing readers. But the best-read Createquity posts are also some of its longest. In fact, of the top 10 all-time, all are over 1,000 words, six are over 2,500, and two have more than 5,000 words!
Wordiest author: Excluding one-off guest posts like that of Eric Booth and Tricia Tunstall, that would be Katherine, averaging 2,796 words per post. It may surprise you to learn that I have the shortest average post length of any Createquity writer.
Most common search terms: Createquity, arts sustainability, create equity, how to solve calendar problems (because of this post)
Most bizarre search term: “some benefits to user and some developer that a rise from realising software as a moss?”

If you’d like to leave Createquity a little present for its fifth birthday, I would be much obliged if you would participate in a little “roll call” in the comments. Tell us a little about yourself and your experience with the site – when and how you first discovered Createquity, what makes you keep coming back, your favorite post or series, and most importantly, any suggestions/thoughts/ideas you may have for the future! Createquity is what it is because of you, after all, and I thank you for your continued engagement and interest after all this time.


Artificial Intelligence and the Arts

A painting made by Harold Cohen’s computer program, AARON. Photo by Conall O’Brien

If a computer composes a symphony, should the resulting musical piece be considered a work of art? And how does a computer-generated work affect our perception of human-made works? These are not theoretical questions. A recent article in Pacific Standard highlights Simon Fraser University’s Metacreation project, which aims to investigate computational creativity, in part through the development of “artificially creative musical systems.”

This past June, three members of the project – researchers Arne Eigenfeldt, Adam Burnett and Philippe Pasquier – presented an evaluation study of musical works composed by software programs at the 2012 International Conference on Computational Creativity. At a public concert in which both human-composed and computer-assisted music were performed by a professional string quartet, a percussionist and a Disklavier (a mechanized piano that interprets computer input), audience members were unable to differentiate between music generated by a computer and music written by a human composer, regardless of their familiarity with classical music.

The Metacreation project is not the only example of advances in artificial intelligence (AI). David Cope’s Experiments in Musical Intelligence (EMI) is a software system that analyzes existing music, and then generates original compositions in the same style. What’s more, such advances aren’t limited to musical arrangements. In 2008, the Russian publishing house Astrel SPb released True Love, a 320-page novel written in 72 hours by a computer program. And the Tate Gallery, SFMOMA and the Brooklyn Museum are among the institutions that have exhibited paintings made by AARON, an autonomous art-making program created by Harold Cohen. Indeed, computers’ capabilities now rival cognitive functions once thought to be intrinsically human. Computers can form links, evaluate, and even make novel works; they can function in ways that we think of as creative. The obvious question is, if computers are performing creatively, should we consider the resulting works art?

The simplest answer, and in many ways the most appealing to the human ego, is that no, these computers are not making art. Art requires intention. This is why projects like Rirkrit Tiravanija’s Untitled 1993 (Café Deutschland), in which the artist set up a functioning cafe in a private gallery in Cologne, or Lee Mingwei’s The Living Room, in which Mr. Lee transformed a gallery into a living room and selected volunteers to act as hosts, are art; their makers intended them as such. By contrast, EMI, AARON and other AI systems have no sentient intentions to make art, or anything else. Therefore, the works they create are not art, although they could be considered as such if a human had made them. Instead, it’s the software itself that is the art, and its programmers the artists.

By this reasoning, even if the computer-generated works are, in fact, works of art, they are authored not by the computer, but by human software designers. The computer is merely a tool for making art, analogous to a brush or musical instrument. As a 2010 Pacific Standard article “The Cyborg Composer” quotes EMI’s creator David Cope:

’All the computer is is just an extension of me,’ Cope says. ‘They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?’

Indeed, the works created by EMI, by AARON and within the Metacreation project are products of the information that their programmers choose to input. “The Cyborg Composer” details Cope’s process:

This program would write music in an odd sort of way. Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as ‘good,’ others as ‘bad.’ Eventually, the exchange produces a score, either in sections or as one long piece.
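As a toy illustration of that idea – mine, and vastly simpler than EMI itself – an association network can be as little as a table of note-to-note transitions whose weights rise with “yes” feedback and fall with “no,” with generation favoring the heavily weighted paths:

    # A hypothetical toy version of the "association network" described above
    # (EMI is far more sophisticated; this just shows the weighting mechanic).
    import random

    class AssociationNetwork:
        def __init__(self):
            self.weights = {}  # (note, next_note) -> weight

        def feed(self, phrase, good=True):
            # Reinforce (or penalize) every transition in a musical phrase.
            for a, b in zip(phrase, phrase[1:]):
                w = self.weights.get((a, b), 1.0)
                self.weights[(a, b)] = max(0.1, w * (1.5 if good else 0.5))

        def generate(self, start, length=8):
            # Random-walk the network, preferring highly weighted transitions.
            phrase = [start]
            for _ in range(length - 1):
                options = [(b, w) for (a, b), w in self.weights.items()
                           if a == phrase[-1]]
                if not options:
                    break
                notes, weights = zip(*options)
                phrase.append(random.choices(notes, weights=weights)[0])
            return phrase

    net = AssociationNetwork()
    net.feed(["C", "E", "G", "E", "C"], good=True)   # the composer says "yes"
    net.feed(["C", "F#", "B", "F#"], good=False)     # the composer says "no"
    print(net.generate("C"))

Feed it enough “good” phrases and its walks start to resemble them; that, scaled up enormously and enriched with real musical knowledge, is the kind of exchange the passage describes.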

Similarly, AARON’s paintings rely on the knowledge that Harold Cohen enters. AARON’s paintings all have similar subjects—mostly people standing with plants. In an interview for PBS’s Scientific American Frontiers, Cohen explains:

AARON can make paintings of anything it knows about, but it actually knows about very little—people, potted plants and trees, simple objects like boxes and tables, decoration. From time to time I wonder whether it wouldn’t be a good idea to tell it about more, different, things, but I can never persuade myself that it would be any better for knowing how to draw a telephone, for example. So I always end up trying to make it draw better, not more.

Certainly, computers will continue to evolve as tools that artists can use. But what if computers themselves become advanced enough to design the software that is used to create paintings, sculptures, symphonies or stories? Who is the artist then? We wouldn’t give credit to Tony Smith for his daughter Kiki Smith’s drawings and sculptures. By the same token, it wouldn’t make sense to credit a programmer for works made by software that his program designed; the computer would undeniably be the artist. However, as we create computers and software capable of making works that aesthetically can’t be distinguished from artworks made entirely by the human hand, such works may become less appealing and desirable. The process will become just as important as the finished work, and we’ll esteem the imperfections of the human hand. In fact, this is an extension of the craft and farm-to-table movements currently in vogue. Just as one might prefer a hand-stitched scarf to one that is digitally embroidered, or misshapen heirloom tomatoes to their perfectly round supermarket counterparts, so too might an art collector choose a human-made painting over one that was computer generated. Furthermore, as computers become more capable of creating art objects, we’ll see a shift toward art that is less object-centric and more experience-centric. We’ll see more projects like those of Rirkrit Tiravanija and Lee Mingwei – participatory, interactive and socially engaged art.

Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” considers the potential effects of photography and film, then new media, on the arts. In the seminal 1936 essay, Benjamin discusses the decline of the autonomous aesthetic experience resulting from the loss of “aura,” or the sense of detached authority that lies in original, one-of-a-kind works. It makes sense that the computer generation, rather than reproduction, of art might lead to a similar loss. After all, artwork that can be created by a computer becomes less special as it becomes less rare. If we have a tool that can generate a perfect symphony or painting, it becomes less interesting to make these things at all. Accordingly, as computational creativity advances, artists may become less concerned with creating beautiful music or paintings or objects and more concerned with making something that is not so easily produced by a series of mathematical functions.

However, what if we enter Isaac Asimov and Philip K. Dick territory, into a world where computers are not merely executing a set of algorithms, but are actually thinking in the human sense? What if we eventually create computers that possess intentionality? I subscribe to philosopher John Searle’s theory that this sort of artificial intelligence is impossible. Searle argues that computers can simulate, but not duplicate, human thinking, and illustrates his contention with a thought experiment, “The Chinese Room.” Searle asks us to imagine a computer into which a Chinese speaker can input Chinese characters. By following the instructions of a software program, the computer then outputs Chinese characters that appropriately respond to what was entered, so that any Chinese speaker would be convinced that he or she was talking to another Chinese-speaking person. Searle then offers another scenario: suppose that he (an English speaker who does not speak Chinese) is in a room and is given a set of Chinese characters. He’s also given a set of instructions in English, which he follows to create responses in Chinese that will convince any Chinese speaker that he or she is conversing with a fellow Chinese speaker. In fact, Searle would be following a program, just as the computer in the first scenario did, to create his responses. In his Behavioral and Brain Sciences article “Minds, Brains, and Programs,” he explains that although he was able to generate intelligent responses in Chinese, he’s still unable to understand Chinese. And because Searle is merely replicating the computer in analog form, if Searle cannot understand Chinese, the computer cannot either. Therefore, although the computer may be able to look as though it holds a human level of comprehension, its actual intelligence is more superficial. As Searle puts it (referring to Roger Schank’s story-understanding programs, the original target of his critique):

For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in the cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.

But even if we do believe that computers with feelings are the future of science and not science fiction, we’ll have essentially created another intelligent life form, just one that is not carbon-based. At that point, these beings would be less computer and more human. They may indeed have intentionality, and with it all of the emotional baggage and thought capacity that can be both a help and a hindrance when creating a masterpiece.
