Title: Do Donors Care About Results? An Analysis of Nonprofit Arts and Cultural Organizations
Author(s): Cleopatra Charles and Mirae Kim
Publisher: Public Performance & Management Review
Topics: Cultural Data Project, nonprofit success metrics, philanthropy
Methods: Analysis of attendance, website engagement, and financial data from a subset of arts and culture organizations that completed Cultural Data Profiles between 2005 and 2015
What it says: This study examines whether nonprofit arts organizations with better “performance outcomes” – defined as higher attendance numbers, audience “awareness of arts and culture activities” (measured through website visits), and “increased access to diverse audiences” (measured through the number of free tickets provided) – receive more contributions than other arts nonprofits. It also examines whether organizations with lower overall fundraising costs (measured as the ratio of development expense to dollars raised) receive more contributions than organizations that spend more to bring money in the door. The authors find that foundation contributions decrease as organizations’ audiences and web viewership grow. The effect of audience growth on individual donations is also negative but much smaller, while the effect on corporate donations is not statistically significant. The relationship between fundraising expense and donations is similarly split, but in the opposite direction: as organizations spend more per dollar raised, foundation giving goes up, while individual and corporate donations are not visibly affected.
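The fundraising cost measure described above is a simple ratio. A minimal sketch, assuming hypothetical field names (the actual CDP variable names are not given in the review):

```python
# Sketch of the fundraising cost ratio described above: dollars spent on
# development per dollar raised. The argument names (development_expense,
# total_contributions) are illustrative, not taken from the CDP data dictionary.

def fundraising_cost_ratio(development_expense: float,
                           total_contributions: float) -> float:
    """Return fundraising dollars spent per dollar of contributions raised."""
    if total_contributions <= 0:
        raise ValueError("total_contributions must be positive")
    return development_expense / total_contributions

# Example: an organization spending $50,000 to raise $400,000
# spends 12.5 cents per dollar raised.
print(fundraising_cost_ratio(50_000, 400_000))  # 0.125
```

On the study's finding, a higher value of this ratio (less efficient fundraising) was associated with more foundation giving, not less.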
In discussing these findings, the authors conclude that certain performance outcomes for arts organizations have little to no relationship to their success with donors; in fact, “better performance outcomes in terms of increased awareness and attendance have a negative rather than a positive influence on charitable giving,” specifically related to foundations.
What I think about it: It’s tempting to get hung up on this study’s limitations – for example, its dubious use of attendance numbers and website visits as proxies for success, its reliance on self-reported Cultural Data Project (CDP) data, and its lack of a random sample. The latter point is particularly problematic because it’s difficult to glean the total sample size of the organizations analyzed. The authors note that they focus only on organizations with audited financials (a criterion that excluded a whopping 52% of the overall pool) and further removed organizations with no expenses, revenue, government support, website visitors, or free tickets. Far from examining a cross-section of arts nonprofits in the United States, this study covers roughly half of the organizations that submitted CDP profiles – meaning they are also located in one of the 12 states (including the District of Columbia) with active CDP partnerships. Only one of those states (California) is on the West Coast. None are in the South or Pacific Northwest.
Furthermore, just because such correlations exist in the CDP dataset doesn’t mean donors are making decisions based on the metrics the authors examine. There is no way to know whether foundations in the regions studied were closely following attendance rates at the organizations they funded, to say nothing of website traffic. To be fair, the authors acknowledge most of these problems, and they call for further research to better examine the relationship between organizational performance, fundraising, and donor behavior. That said, drawing firm conclusions from this study is difficult. What does seem clear is that a) foundations in the regions where CDP is used appear to behave differently from corporate and individual donors; and b) that behavior implies they are more likely to decrease their support as an organization builds a larger audience base. Similarly, foundations in those regions are more likely to respond positively to organizations that spend more money on fundraising.
What it all means: Given the unique role that foundations play in the nonprofit arts ecosystem as gatekeepers and, oftentimes, thought leaders, this study raises several intriguing questions about the extent to which they actually respond to the metrics of success they ask their grantees to adhere to. It’s very possible that most funders eschew attendance and website data altogether and focus instead on outcomes tied to their own theories of change. It’s also possible (though not likely) that they are easily charmed by development officers and/or fundraising galas. Whatever the case may be, it’s worth noting that their decisions do not align with those of individual or corporate donors. Perhaps that’s how it should be; perhaps it indicates a flaw in how most foundations decide whom to support. Without more research into the questions the authors raise – and more comprehensive datasets with which to analyze them – it’s difficult to know.