(Cross-posted from the Fractured Atlas blog. This is the second in an occasional series on Fractured Atlas’s research approach and philosophy. The first can be found here.)
Many of us, especially if we’ve been present at a Rocco Landesman speech in the past year or so, are probably familiar with the quote widely attributed to W. Edwards Deming: “In God we trust; all others must bring data.” And if you’ve filled out a final report for any grant recently, you’ve probably come face to face with philanthropy’s insatiable hunger for numbers. Attendance figures, financial data, surveys—all of these and more are fast becoming a permanent fixture of life in the arts, often to the chagrin of fundraisers and arts administrators.
Much of the recent drive toward measurement in the nonprofit sector comes from a new generation of philanthropists, many arriving from metrics-obsessed corporate America, who see in numbers the promise of evaluating the effectiveness of their giving with the same facility as they evaluate their investments in the stock market. The leaders of this so-called “smart giving” movement carry a strong distrust of anecdotal evidence (GiveWell is pretty much exhibit A here), privileging “hard,” rigorously collected data instead. Conveniently, they also tend to focus the bulk of their attention and resources on cause areas such as education, poverty, and global health, where data is in much readier supply.
Caught in the middle of this trend, artists frequently express discomfort with perceived attempts to translate their work into a statistic. For a field that prides itself on expressing the inexpressible, the notion of reducing a potentially life-changing experience to a number doesn’t just feel confusing, it’s kind of insulting. What’s more, fundraisers who work with individual donors often find that, by contrast, a powerful story can do wonders where facts and figures fall flat. (The same could be said for advocates and politicians.)
It’s easy to see why artists and administrators might prefer stories to data. A story is rich, full of detail and shape. Data is flat. Put another way, data is mined from the common ground between many stories, which means that to convert it into the language of numbers, you have to exclude extraneous information. Even if that “extraneous” information happens to be really interesting and cool and sums up exactly why we do what we do!
The reason stories work for us as human beings is that they are few in number. We can spend two hours watching a documentary, or a week reading a history book, and come away with a deep qualitative understanding of a specific situation or case. The problem is that we can only truly comprehend so many stories at once. We don’t have the mental bandwidth to process the experiences of even hundreds, much less thousands or millions, of subjects or occurrences. To make sense of those kinds of numbers, we need ways of simplifying and reducing the amount of information we store about each case. So we take all of those stories and flatten them: we dry out the rich shape and detail of their original form and package them instead in a kind of mold, collecting a specific and limited set of attributes about each so that we can analyze them in batch. In a very real sense, data = mass-produced stories.
It sounds horrible when I put it like that, right? But it’s an essential process, because without it we can’t be sure we’re looking at the whole picture. Especially when we’re dealing with a large number of potential cases or examples, if we concentrate only on those nearest to us, whether that proximity is measured by geography, social or professional circle, or similarity to our own situation, we run a very real risk of drawing inappropriate conclusions about examples a little farther afield. Random statistical noise (especially with small sample sizes) and bias in the kinds of examples we seek out can both undermine those conclusions.
So we gain something very significant when we flatten stories into data. At a minimum, if we’re doing it right, we gain the confidence that comes with looking at the whole picture rather than only a piece of it. At its very best, we gain the opportunity to formulate stories out of data – such as in the case of Steve Sheppard’s work on MASS MoCA and the revitalization of North Adams, MA. But we lose something too. We lose the ability to cross-reference obscure details about one of our examples with obscure details about another, and sometimes those obscure details turn out to be pretty important. We lose some of the context for understanding why data points might look the way they do, and depending on how well we’ve constructed our data, that may or may not change the conclusions we draw.
But make no mistake: stories are never incompatible with data. When you or someone you know has an incredible experience at an arts event, or when a troubled child’s life is saved through involvement with the arts, or when people are brought together who wouldn’t otherwise meet because of the arts, those are all great stories – and they’re also data. One could imagine counting the number of lives saved by the arts, scoring the quality of arts events, cataloguing the new connections and friendships made possible through arts activities. I’m not saying it’s easy to do such things, but that doesn’t mean they can’t be done meaningfully and with integrity. I think we need to challenge ourselves as a field to be more creative about how we articulate and measure the ways in which the arts improve lives. The answers that we’re looking for might be closer within our reach than we thought.