(Crossposted from the Fractured Atlas blog. This is the first in a series of posts about Fractured Atlas’s research approach and philosophy.)
I was a participant in a couple of conversations with fellow arts research nerds recently in which we discussed the notion of cause and effect. You remember that one from grade school, right? Well, it turns out that when it comes to research (and especially arts research), it’s not as simple as we all thought.
You see, in science, when we say that something caused something else, we tend to want to be sure. A common concept in statistics is “significance”: the idea that a meaningful connection exists between two variables that can’t be explained by random noise. If you’ve formed a hypothesis and designed your experiments correctly, and you get statistically significant results, you can be fairly confident that the results you’re looking at indicate something real and not merely an accident or coincidence.
Of course, it’s never possible to be entirely sure. But in some fields, you can get pretty close. The technical term for the degree of uncertainty we’re willing to tolerate in an analysis like this is the alpha: an alpha of 0.05 means we’re willing to accept at most a 5% chance of wrongly rejecting the null hypothesis (i.e., the possibility that we’re just looking at random noise) before we go ahead and report a result as meaningful. In some fields, it’s common to require an alpha of 0.001 or even lower – that is, no more than a 0.1% chance of mistaking random variation for a real effect. And for something like testing a new drug, you most definitely want to be that sure that, for example, it doesn’t cause heart attacks as a side effect before you put that sucker on the market.
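To make that concrete, here’s a minimal sketch in Python of what checking a result against a chosen alpha looks like. The data here are simulated and purely hypothetical – they don’t come from any real study – and the test shown (a two-sample t-test) is just one common way to do this kind of comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical, simulated outcome scores for a treatment group and a control group.
treatment = rng.normal(loc=52.0, scale=10.0, size=100)
control = rng.normal(loc=50.0, scale=10.0, size=100)

alpha = 0.05  # the degree of uncertainty we're willing to tolerate

# Two-sample t-test: could a difference this large plausibly be random noise?
t_stat, p_value = stats.ttest_ind(treatment, control)

if p_value < alpha:
    print(f"p = {p_value:.3f}: significant at alpha = {alpha}")
else:
    print(f"p = {p_value:.3f}: could just be random noise at alpha = {alpha}")
```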
In the social sciences, though, which is where arts research generally lives, it’s much harder to be certain about your results. That’s not necessarily because it’s harder to get statistically significant results. It’s because it’s harder to design the models and experiments — in other words, your hypothesis as to what is happening and why, and the means you use to test it — with integrity.
In order for a model to work, it needs to account for everything that might affect the result you’re looking for. For the health sciences, this can be pretty simple: you give people a drug or other treatment, and you measure whether they get better. You can collect other information about them, such as their race, age, gender, and so forth, in order to catch any differences along those axes, but otherwise it’s fairly straightforward. It’s also relatively routine (though sometimes complicated by ethical issues) to construct what’s known as a control group: a set of comparable individuals with the same problem who don’t receive the treatment.
In the arts, by contrast, both of these conditions are much harder to meet. First, when you’re looking at the impact of an arts program on things like, say, crime rates in a community, or educational outcomes for children, or hell, even just straight-up happiness for individual arts participants, it’s really difficult to isolate the unique contribution of the arts program from everything else that could be entering into the picture and affecting the results (e.g., the economy, quality parenting, or what they had to eat that day). And second, unlike in the case of giving somebody a pill, it’s hard to separate, in a scientifically rigorous way, the recipients or beneficiaries of arts programs from people who don’t benefit – especially when the desired results concern whole communities or ecosystems.
*
It would be tempting, faced with that litany of challenges, to conclude that arts research simply isn’t worth the trouble. I would disagree with such a conclusion. Well-executed arts research, though rarely providing evidence beyond all doubt, can nevertheless help to illuminate some of the key assumptions that we make when we design arts programs. And boy, do some of those assumptions need illumination!
I view causality and arts research more generally through a frame of program evaluation and impact assessment. Fractured Atlas is currently undertaking an evaluation of one of its own programs and also helping an external client develop a framework to assess the impact of its grantmaking. One of the most important steps in that process is to identify the underlying assumptions upon which your program strategy rests. Every time we employ strategy to make decisions, no matter what the context, we are making assumptions. When we sign up for a test-prep course to improve our score on the GRE, for example, we assume that the instructor and materials are of a sufficient quality to help us learn, and also that we’re intellectually capable of achieving a better score with assistance. When we open a Twitter account to drive traffic to a website, we assume that we’ll have sufficient time and inspiration to generate content for the medium in a way that will gain traction over time. And when we start an organization whose goal is to bring about world peace through the arts, well, there are a LOT of assumptions that go into that one!
Where research can help us most is by telling us whether or not our assumptions are valid. We might feel more confident about our decision to sign up for the test prep class if we can first view data on how much improvement previous participants saw in their scores after taking it. Our organization’s decision to sign up for Twitter would be made easier if we had information on the trajectories of comparable peers’ tweet activity and followers over time along with measures of how much of a drain it was on staff resources. To me, research is not especially meaningful or worthwhile unless it has the potential to inform, either directly or indirectly, the decisions we make. But if it does, it can be very valuable indeed.
That’s because, unlike in health care or the pharmaceutical industry, in the arts we’re (usually) not dealing with life and death. It’s okay if we make a mistake once in a while; the world will continue on. So we don’t need to have 99.9% or even 95% certainty that the choices we make are the right ones before we move ahead. Indeed, as of now it’s likely that we make some decisions with virtually no certainty of their wisdom at all! To the extent that research can play a role in reducing the uncertainty we face in making decisions within a strategic framework, that research can provide real, quantifiable value to its users.
Let me elaborate on that last point. One of the most powerful tools I learned in business school was decision analysis, a conceptual approach useful for incorporating uncertainties into scenario planning. A common concept in decision analysis is what’s known as “the value of perfect information.” You know you have perfect information when there is absolutely no uncertainty in the outcomes that might result from an action or set of actions you take. The value of perfect information is the difference between your “expected value” (i.e., the probability-weighted average of the outcomes of the best strategy available to you) with that certainty and without it. For example, if you’re only 60% sure that taking the test prep class will get you to the GRE score you need, there’s a 40% chance the amount you spend on the class will be a waste. In the language of decision analysis, that’s equivalent to saying that you can “expect” to lose 40% of your investment. With perfect information, you’d only pay for the class when you knew it would lead to the result you want, so you’d have no risk of wasting that money. Thus, the value of perfect information in this case is 40% of the price of the class.
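Here’s that arithmetic spelled out as a tiny Python sketch. The class price is a made-up figure, and the 60/40 split is just the hypothetical from the paragraph above:

```python
# Hypothetical test-prep example: all figures are illustrative, not real data.
class_price = 500.0   # assumed cost of the prep class
p_success = 0.60      # chance the class gets you the score you need

# Without further information, you pay for the class no matter what,
# so you "expect" to waste the price whenever it doesn't work out.
expected_waste_without_info = (1 - p_success) * class_price

# With perfect information you'd only pay when the class will work,
# so the expected waste drops to zero.
expected_waste_with_info = 0.0

value_of_perfect_information = expected_waste_without_info - expected_waste_with_info
print(value_of_perfect_information)  # 200.0, i.e. 40% of the price of the class
```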
Research, especially research in the arts, can’t give us perfect information. But it can sure as hell give us better information than we already have. Even if it can reduce our uncertainty that our strategy is the right one from 40% to, say, 20%, that’s still quite a boost to our confidence. But the value of research is only as high as its quality. Badly designed or poorly executed studies can be next to useless in reducing uncertainty, or worse, can actually increase it by confusing the underlying issues. Unfortunately, no certification body currently exists to ensure the research conducted in the arts is of a sufficient quality to be helpful. The best way to make sure as a field that we don’t get taken in by low-quality work is to take some time to educate ourselves on good research practices. For a good, short primer, I recommend Evaluation Essentials by my own program evaluation teacher, Beth Osborne Daponte.
Next time, some thoughts on how Fractured Atlas puts these principles into practice.