(Note: over the years, I’ve gotten out of the habit of reporting live from the conferences I attend. Several factors contributed to this development, including the proliferation of other blogs in the arts management/policy space that cover the same events, the advent of Twitter and live streaming, my own life getting busier, and, frankly, my sense that it’s not so easy to make conference blogs “pop.” That said, for a variety of reasons, you’re going to see a lot of conference blogging on this site over the next few weeks! First up is the Grantmakers in the Arts conference, which I attended from October 8-12, followed by the one-day Beyond Dynamic Adaptability conference this past week, and finally the Independent Sector Conference in Chicago October 30-November 1. In each case, I have a specific reason for my dispatches, which I’ll share in the respective posts.)
*
In many ways I have Grantmakers in the Arts to thank for this blog reaching the people it does today. That’s because, in what can only be called a stroke of dumb luck, GIA Deputy Director Tommer Peterson invited me to be the organization’s first official conference blogger in 2009, which brought Createquity to the attention of many funders who would not otherwise have discovered it. I did not hold any such honor this year, but I’ve decided to write up my thoughts anyway, because I’ve since come to realize what an incredible privilege it is to be allowed to represent Fractured Atlas at this annual gathering of funders, which is otherwise closed to non-grantmakers, and because I feel a duty to share what I learn and observe with the rest of the field, for whom such access is out of reach.
My first experience at this year’s GIA conference, subtitled “Navigating the Velocity of Change,” was the Art & Technology Preconference, which I believe (please correct me if I am wrong) is a first for GIA. Fittingly held in the heart of Silicon Valley at San Jose’s $382 million Richard Meier-designed City Hall, the preconference was the highlight of the trip for me. I was blown away by Joaquin Alvarado’s wide-ranging opening keynote, which explored issues as diverse as the open-source ethos, participatory web projects (Popcorn, a tool to integrate text, video, and other media from anywhere on the web, and Universal Subtitles, a crowdsourcing platform for translating video into foreign languages, were particularly attention-grabbing), and evolving trends in the demographics of tech-savviness. Alvarado is Senior Vice President for Digital Innovation at American Public Media/Minnesota Public Radio, and he shared the details of an intriguing knowledge-sharing model for journalists that he is working on, called the Public Insight Network, as well as a “balance the budget” game his team created that has garnered some 6,000 comments from all sides of the political spectrum. Through his talk, I learned that the #1 generator of data on the planet is the United States government; that a $35 tablet computer has been released in India; that the free and open internet is fast becoming a thing of the past; that internet service providers have more unionized workers than anyone else in technology; that the fastest growing segment of video gamers is women over the age of 60; that the library is where 20% of Americans get their broadband; and that there were more votes in American Idol last year than there were votes in all democratic societies combined. Whew! We also had a funny moment when we realized that no one in the room had played fantasy football, causing Alvarado to quip (referring to those in attendance), “This is not America!”
*
Another standout session from the Technology Preconference was “Supporting the Ecology of Awesomeness,” led by Awesome Foundation for the Arts and Sciences co-founder Tim Hwang. I’ve written here before about the Awesome Foundation, which is a kind of giving circle model for the 21st century sprung from the minds of irrepressible techies. Every Awesome Foundation chapter (of which there are now 29 around the world) is run by ten board members, who pool $100 each every month and award one grant to…well, pretty much anything that seems really cool. (The inaugural Awesome Foundation grant was for a giant hammock in a Boston park that could hold up to 20 people at a time.) Prospective grantees need only fill out a 10-minute online grant application, and the model is so lightweight that many chapters don’t even have bank accounts. (The 10-member limit is also interesting; Hwang moved from Boston to San Francisco and was only able to join the local Awesome Foundation board because there happened to be someone leaving the very month he moved.)
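For the arithmetically inclined, here is a minimal back-of-the-envelope sketch of the model’s scale, using only the figures above (10 trustees contributing $100 per month, 29 chapters). The network-wide total is my own extrapolation and assumes every chapter actually makes a grant every month, which in practice may not hold.

```python
# Back-of-the-envelope math on the Awesome Foundation model, using the figures
# cited above: 10 trustees x $100/month per chapter, 29 chapters worldwide.
# The network-wide figure assumes every chapter grants every month (an assumption).
TRUSTEES_PER_CHAPTER = 10
CONTRIBUTION_PER_TRUSTEE = 100   # dollars per month
CHAPTERS = 29

monthly_grant_per_chapter = TRUSTEES_PER_CHAPTER * CONTRIBUTION_PER_TRUSTEE
annual_giving_per_chapter = monthly_grant_per_chapter * 12
annual_giving_network_wide = annual_giving_per_chapter * CHAPTERS

print(f"Monthly grant per chapter:   ${monthly_grant_per_chapter:,}")     # $1,000
print(f"Annual giving per chapter:   ${annual_giving_per_chapter:,}")     # $12,000
print(f"Annual giving, all chapters: ${annual_giving_network_wide:,}")    # $348,000
```

In other words, each chapter moves about $12,000 a year, and even the whole network combined distributes less than many single foundation grants.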
Hwang and his compatriots have built an infectious language around their creation, one full of self-deprecating irony. Calling the concept “micro-funding for micro-geniuses,” Hwang described the “Virtuous Cycle of Awesomeness” that kicks in as the funding opportunity attracts more attention. Each board member occupies a “Chair” named after the original board member to hold that spot – so someone could be the “3rd David Fisher Chair” of the Boston chapter, for example. And when it finally came time to incorporate the Awesome Foundation as a centralized nonprofit, the new entity wasn’t called the Awesome Foundation at all – it’s the Institute on Higher Awesome Studies! They’re even planning to publish a journal.
I love the Awesome Foundation’s low barriers to entry (it’s particularly impressive and important that they don’t arbitrarily restrict the pool of potential recipients by legal status or force them into categories that may or may not fit what they do), the lightweight and portable nature of the model, and most especially the sense of pure fun that Awesome Foundation trustees bring to the practice of philanthropy. At the same time, the model has its downsides, and I hesitate to leap to too many conclusions regarding its applicability to the rest of our field. There’s an inherent lack of scalability within a particular locality, given the limit of ten trustees (and $12,000 in total annual giving) per city. Hwang mentioned that several chapters were exploring the possibility of starting another chapter in the same city, but no information was provided on how, if at all, those chapters would coordinate to prevent duplicate applications, much less duplicate grants. Hwang and company believe that accountability is a barrier to innovation, but the absence of strong central coordination means that data collection is, understandably, haphazard, and sometimes the main organization doesn’t even know what all the other chapters are up to. Finally, although I personally love and relate to the word “awesome” and the language and ethos around it, I sometimes wonder whether that’s because it resonates with certain aspects of my background – white, male, young, educated, tech-savvy – and whether it would feel a little or a lot exclusionary to people who don’t fit one or more of those descriptions. Hwang reports that the Awesome Foundation boards are gradually diversifying as the chapter network grows (the average age of the Florida chapter’s trustees is apparently far higher than that of the rest of the country), but it’s still hard for me to imagine some of the attendees of the Art & Social Justice preconference being down with the Awesome Foundation. (I would love to be proven wrong on this, by the way.)
*
My experience at the main GIA conference was more mixed. A number of the sessions I was most interested in were scheduled against each other, and I was sorry not to be able to attend several that seemed to generate quite a bit of buzz, including the announcement of the NEA/Knight Foundation’s first-ever Arts Journalism Awards, the unveiling of the Irvine Foundation’s new grantmaking strategy, and the release of Holly Sidford’s controversial report on equity in arts funding for the National Committee for Responsive Philanthropy. Fortunately, those sessions were covered in depth by “official” GIA bloggers at the links above.
Three sessions I found particularly interesting were Manuel Pastor’s Monday keynote on changing demographics, an offsite session focusing on the evaluation of the San Francisco Arts Commission’s Cultural Equity Grants program, and a “video game salon” organized by Ron Ragin of the Hewlett Foundation and Marian Godfrey of the Pew Charitable Trusts. (Disclosure for those who don’t know: Ron has been my close colleague for the past several years in helping Fractured Atlas build Archipelago and the Bay Area Cultural Asset Map.)
Manuel Pastor is a remarkably engaging speaker. Clearly accustomed to the lecture format, he delivered a tour-de-force presentation on the changing demographics of California and the nation at large. Given the work I’ve been doing around California cultural geography for the past couple of years, many of his revelations (for example, that California is already a majority-minority state and that the US as a whole is headed there by 2042) were familiar to me, but even so I learned that the demographic picture is more complex than it is often painted. For example, people often think that the explosion in growth is primarily coming from Latino immigrants, and that used to be the case. But immigration is no longer what’s driving growth: developing-nation economies are doing better, and birth rates in those countries are going down. Meanwhile, the percentage of foreign-born individuals in California is declining, and Los Angeles was the only metropolitan area in the top 100 to experience a decrease in the number of Hispanic children under the age of 18 over the past decade. The share of recent immigrants to California who came from Mexico was just 1 in 3, although many of the other top countries of origin were in Central America and the Caribbean. I also learned that many Hispanics insist on identifying as their own race, even though the Census doesn’t classify them that way – in every Census since the question was first asked, approximately half of all Latinos have marked “Other” for race.
*
Although I missed Holly Sidford’s session presenting her report on equity in grant funding, I did catch her and the Helicon Collaborative team at “Cultural Equity Grantmaking: How Far Have We Come? What’s Next?” The San Francisco Arts Commission, whose Cultural Equity Grants program was the subject of the titular study, has gone through some tough times recently, and as a result the evaluation is apparently being withheld from public release for the moment. But we were treated to a preview of the results, which let us know that (shocker alert!) a grant program amounting to $2 per citizen per year and representing only 4% of city funding for the arts has not succeeded in achieving cultural equity. That’s not to say it hasn’t made a difference, though. Funded groups reported that the grants helped leverage other funding, enabled risk-taking projects, and deepened artistic relationships. Perhaps more significantly, fully a third of the city’s Grants for the Arts funding now goes to “culturally diverse” organizations, although it’s hard to know how much of that increase was influenced by the existence of the Cultural Equity Grants program and how much was due to other factors.
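To get a rough sense of the scale those figures imply, here’s another quick back-of-the-envelope sketch. The $2-per-citizen and 4% figures come from the presentation; San Francisco’s population of roughly 800,000 is my own assumption (approximately the 2010 Census count), so treat the outputs as ballpark estimates rather than the evaluators’ actual numbers.

```python
# Rough scale of the Cultural Equity Grants program implied by the figures above.
# NOTE: the ~800,000 population is my assumption (approximate 2010 Census count);
# the denominator the evaluators actually used may differ.
POPULATION = 800_000
DOLLARS_PER_CITIZEN_PER_YEAR = 2       # "$2 per citizen per year," as cited above
SHARE_OF_CITY_ARTS_FUNDING = 0.04      # "only 4% of city funding for the arts"

ceg_annual_budget = POPULATION * DOLLARS_PER_CITIZEN_PER_YEAR
implied_total_city_arts_funding = ceg_annual_budget / SHARE_OF_CITY_ARTS_FUNDING

print(f"Implied CEG budget:              ${ceg_annual_budget:,.0f} per year")   # ~$1,600,000
print(f"Implied total city arts funding: ${implied_total_city_arts_funding:,.0f}")  # ~$40,000,000
```

Against a total city arts investment on the order of tens of millions of dollars, a program of that size was always going to be a modest lever.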
The discussion following the presentation posed some important and intriguing questions. Although everyone seemed to agree that organizations representing non-European cultures should get a bigger piece of the funding pie as a basic tenet of fairness, the picture of what that would actually look like in practice seemed less clear. Many of the largest investments in traditional SOB (symphony, opera, ballet)-type organizations have gone to bricks-and-mortar purposes like new buildings, expansions, and renovations, but several in the room commented that building massive institutions was not necessarily a priority for the organizations that would benefit from increased funding. Another interesting strand of conversation concerned whether having a separate, dedicated stream of funding for diverse programming, as in the case of San Francisco’s Cultural Equity Grants program, is helpful to the cause or only serves to justify the much larger investments made in the “regular” pool. Finally, as discussion continued regarding the needs of culturally specific organizations, I kept hearing many of the same themes that come up in discussions of the needs of small to medium-size organizations in general: more general operating support, capacity building, risk capital, and so on. Recognizing that I still have more to learn from these conversations than to contribute to them, I was nevertheless left wondering whether culturally specific organizations are really so specific once you get past the content of their programming and the composition of their audiences.
*
“Don’t Get Pwnd! | A Video-gaming Salon for Grantmakers” was a great way to round out the conference, bringing things full circle from Joaquin Alvarado’s revelations about gaming three days earlier. The session was presented by Jonathan Blow, an independent game developer, and Alice Myatt, director of the media arts program at the National Endowment for the Arts. Blow spoke first, recounting the trials and tribulations of the indie game market. Once again, I found it remarkable how a creative industry outside of what we typically think of as “the arts” can sound so familiar in conversation. According to Blow, creatively speaking, this is the best time in history to make a game. It’s easier than ever before to find an audience through independent distribution, and developers no longer have to pass through the bottleneck of the giant game companies. Yet there are challenges: intense competition for people’s time means that everything in the game matters, because your audience could lose interest at any moment. And game developer conferences are extraordinarily expensive, sometimes as much as $2,000 per person in addition to travel and lodging, shutting out those with less financial wherewithal. Ring any bells? For her part, Myatt spoke of the recent round of grant applications in which the NEA opened up the process to video game developers for the first time. Of 360 electronic media proposals, 20% were gaming-related.
The session was a veritable coronation for video games as an art form, Roger Ebert’s notorious assertion to the contrary notwithstanding. More than once, speakers mentioned the recent Supreme Court ruling declaring video games a constitutionally protected form of expression, along with the fact that the same recognition was granted to film 60 years ago and to literature before that. Blow noted that while everyone in America watches movies, it’s not cool to admit to playing video games – yet. But that’s bound to change soon, now that video games are a bigger business than music and film combined. Myatt opined that games need to be put into the public media pot in order to stabilize society, but lamented that she rarely sees her colleagues at the video game conferences she attends (such as Games for Change). Funders outside the arts are on the case, though: next year’s Council on Foundations conference will actually have a gaming track – complete with a video arcade on site!
*
And that was that. My overall takeaway? It’s hard to generalize from my experience this year, and I am always conscious that the intellectual diet I feed on at the conference is shaped by my own tastes. But in general, there seemed to be a real thirst for innovation that was just a bit more urgent than in previous years. The sessions that drew the most positive attention were, by and large, the boldest: the ones that dared to seriously question the status quo or chart a path forward that hasn’t been tried before. It’s as if, having been buffeted by the winds of change for three years now, funders have become convinced of the futility of fighting back. Perhaps, next year, we’ll see some folks getting out the sailboards, ready to ride this gust wherever it takes them.