That’s the title of a talk I presented via the University of Chicago’s Cultural Policy Center on November 14, 2012. It’s long, but I think it’s one of the more significant things I’ve done recently and hope you’ll check it out if you have some time. The actual lecture portion of the talk occupies the first 52 minutes of the video, and it starts off with a recap/synthesis of material that will be familiar to regular readers of this blog (specifically, Creative Placemaking Has an Outcomes Problem and In Defense of Logic Models). Just shy of the 27-minute mark, though, I pivot and start laying out a diagnosis of how our arts research infrastructure is failing us, a vision for how we could fix it, and why it all matters – a lot.


Since I didn’t write out the speech in advance, I don’t have a transcript for it. However, below is a reconstruction of the new material from my notes, so you can get a taste for it if you don’t have time to watch the whole thing right now. (You’ll notice I make a number of generalizations in the speech about the ways in which arts practitioners interact with research. These are based on observation and personal experience, and are best understood as my working hypotheses.)


[starting at 26:55]

Why is this integration between data and strategy important? Because research is only valuable insofar as it influences decisions. This is why logic models are awesome – they are a visual depiction of strategy. And there is no such thing as strategy without cause and effect. Think about that for a second. Our lives can be understood as a set of circumstances and decisions. We make decisions to try to improve our circumstances, and sometimes the circumstances of those around us. Every decision you make is based on a prediction, whether explicitly articulated or not, about the results of that decision. Every decision, therefore, carries with it some degree of uncertainty. This uncertainty can be expressed another way: as an assumption about the way the world works and the context in which your decision is being made. These assumptions are distinguished from known facts.

If you can reduce the uncertainty associated with your assumptions, the chances that you will make the right decision will increase. So, how do you reduce that uncertainty? Through research, of course! Studying what has happened in the past can inform what is likely to happen in the future. Studying what has happened in other contexts can inform what is likely to happen in your context. And studying what is happening now can tell you whether your assumptions seem spot on or off by a mile. Alas, research and practice in our field are frequently disconnected in problematic ways. Six issues are preventing us from reaching our potential.

Issue #1: Capacity

Supply and demand apply as much to research as they do to artists. There are far more studies out there than a normal arts professional can possibly process. I wish I could tell you how many research reports are published in the arts each year, but nobody knows! To establish a lower bound, I went back over last year’s [2011] “around the horn” posts, which report new research studies that I hear about. I counted at least 41 relevant arts-research-related publications – a tiny fraction, I’m sure, of total output. To make matters worse, research reports are long, and arts professionals are busy. For the Createquity Writing Fellowship program, participants are required to analyze a work of arts research for the Createquity Arts Policy Library. I collect data on how long it takes to do this, and consistently, it requires 30-80 hours to research, analyze, and write just one piece! Multiply this by the number of new studies each year, and you can start to see the magnitude of the problem.

Issue #2: Dissemination

Which research reports is an arts practitioner likely to even know about? Certainly not all of them, because there is almost no meaningful connection between the academic research infrastructure and the professional arts ecosystem. Lots of research relevant to the arts is published in academic journals each year, but unless the researcher was commissioned to do the work by a foundation, we never hear about it. Academic papers are typically locked behind paywalls, and most arts organizations don’t have journal subscriptions. To give an example, after I wrote about Richard Florida’s Rise of the Creative Class, Florida pointed me to a study in two parts by two Dutch researchers. It’s one of the best resources I’ve come across for creative class theory, but I’ve never heard anyone other than him and me even mention either study.

Issue #3: Interpretation

Research reports inevitably reflect the researcher’s voice and agenda. This is especially true of executive summaries and press releases, which are often all anyone reads of a research report. Probably the most common agenda, of course, is to convey that the researcher knows what he/she is talking about. Another common agenda is to ensure repeat business from, or at least a continuing relationship with, the client who commissioned the study. The reality, however, is that research varies widely in quality. There’s no certification process; anyone can call themselves a researcher. But even highly respected professionals can make mistakes, pursue questionable methods, or overlook obvious holes in their logic. And, in my experience, the reality of any given research effort is usually nuanced – some aspects of it are much more valuable than others. Unfortunately, many arts professionals lack the expertise to properly evaluate research reports, not having had even basic statistics training.

Issue #4: Objectivity

Research is about uncovering the truth, but sometimes people don’t want to know the truth. Advocacy goals often precede research. How many times have you heard somebody say a version of the following: “We need research to back this up”? That statement suggests a kind of research study that we see all too often: one that is conducted to affirm decisions that have already been made. By contrast, when we create a logic model, we start with the end first: we identify what we are trying to achieve and only then determine the activities necessary to achieve it.

Here are some bad but common reasons to do a research project:

  • To prove your own value.
  • To increase your organization’s prestige.
  • To advance an ideological agenda.
  • To provide political cover for a decision.

There is only one good reason to do research, and that is to try to find out something you didn’t know before.

Issue #5: Fragmentation

The worst part of the problem I just described is that it drives what research gets done – and what doesn’t get done. There is no common research agenda adopted by the entire field, which is a shame, because collective knowledge is pretty much the definition of a public good: if I increase my own knowledge, it’s very easy for me to increase your knowledge too. The practical consequences of this fragmentation are severe. It results in a concentration of research using readily available data sources (ignoring the fact that the creation of new data sources may be more valuable). It results in a concentration of research in geographies and communities that can afford it, because people don’t often pay for research that’s not about them. And it results in a concentration of research serving narrow interests: discipline-specific, organization-specific, methodology-specific. My biggest pet peeve is that research is almost never intentionally replicated – everybody’s reinventing the wheel, studying the same things over and over again in slightly different ways. A great example of a research study crying out for replication is the Arts Ripple Effect report, which I talked about earlier. The results of that study are now guiding the distribution of millions of dollars in annual arts funding. Are those results universal, or unique to the Greater Cincinnati region? We have no way to know.

Issue #6: Allocating resources

Everyone knows there’s been a trend in recent years towards more and more data collection at the level of the organization or artist. Organizations, especially small ones, complain all the time about being expected to do audience surveys, submit onerous paperwork, and so forth. And you know what, I agree with them! You might be surprised to hear me say that, but when you’re talking about organizations with small budgets, no expertise to do this kind of work, and a funder who requests the information without providing any assistance to get it, the burden just isn’t worth it. Funders: just take a risk! If you make a small grant that goes bad, so what? You’re out a few thousand dollars. The sun will rise tomorrow.

As an example of what I’m talking about, I participated in a grant panel recently. I enjoyed the experience, and am glad I did it, but there’s one aspect of the experience that is relevant here. There were seven panelists, and we were all from out of town. Each of us spent, I’d say, roughly 40 hours reviewing applications in advance of the panel itself. Then we all got together for two full days in person to review these grants some more and talk about them and score them. We did this for 64 applications for up to $5,000 each, and in the end, 94% were funded.

So consider this as a research exercise. The decision is who to give grants to, and how much. The data is the grant applications. The researchers are the review panel. What uncertainty is being reduced by this process? How much worse would the outcome have been if we’d just taken all the organizations, put them into Excel, run a random number generator, and distributed the dollars randomly up to $5,000 per organization? And I’m not saying this to make fun of this particular organization or single them out, because honestly it’s not uncommon to take this kind of approach to small-scale grantmaking. And yet if you compare it to ArtPlace’s first round of grants, theoretically they had thousands of projects to choose from, and they gave grants up to $1 million for creative placemaking projects – but there was no [open] review process; they just chose organizations to give grants to. So there’s a bit of a mismatch in the strategies we use to decide how to allocate resources.
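The random-allocation counterfactual above could be sketched in a few lines. This is purely illustrative – the organization names, the $250,000 total budget, and the choice of Python are my own assumptions, not details from the talk:

```python
import random

def random_allocation(applicants, budget, max_grant=5000):
    """Randomly distribute a fixed budget among applicants, up to max_grant each."""
    awards = {name: 0 for name in applicants}
    remaining = budget
    # Shuffle so list order doesn't favor any applicant
    pool = list(applicants)
    random.shuffle(pool)
    for name in pool:
        if remaining <= 0:
            break
        grant = min(random.randint(1, max_grant), remaining)
        awards[name] = grant
        remaining -= grant
    return awards

# Hypothetical panel scenario: 64 applicants, $250,000 to give away
orgs = [f"org_{i}" for i in range(64)]
awards = random_allocation(orgs, budget=250_000)
print(sum(awards.values()))  # total awarded never exceeds the budget
```

The point of the exercise, of course, is not to advocate literally random grantmaking, but to ask how much better the expensive panel process actually performs than this ten-line baseline.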

There’s a concept called “expected value of information” described in a wonderful book called How to Measure Anything, by Douglas W. Hubbard. It’s a way of taking into account how much information matters to your decision-making process. In the book, Hubbard shares a couple of specific findings from his work as a consultant. He found that most variables have an information value of zero; in other words, we can study them all we want, but whatever the truth turns out to be won’t change what we do, because they don’t matter enough in the grand scheme of things. And he also found that the things that matter the most, the kinds of things that really would change our decisions, often aren’t studied, because they’re perceived as too difficult to measure. So we need to ask ourselves how new information would actually change the decisions we make.
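Hubbard’s idea can be made concrete with a toy calculation of expected value of perfect information (EVPI) for a single yes/no funding decision. All the numbers here are invented for illustration; the structure is the standard EVPI logic, not a figure from the talk:

```python
# Toy scenario: fund a program that returns $200,000 if it "works"
# (estimated probability 0.6) and loses $100,000 if it doesn't.
# The alternative is to do nothing, with a payoff of zero.
p_works = 0.6
payoff_works, payoff_fails = 200_000, -100_000

# Best decision under current uncertainty: fund iff expected value > 0
ev_fund = p_works * payoff_works + (1 - p_works) * payoff_fails  # 80,000
best_now = max(ev_fund, 0)

# With perfect information, we would fund only in the case where it works
ev_perfect = (p_works * max(payoff_works, 0)
              + (1 - p_works) * max(payoff_fails, 0))  # 120,000

# EVPI: the most that research on this one variable could be worth
evpi = ev_perfect - best_now
print(evpi)  # 40000.0
```

If a study of this question costs more than $40,000, it isn’t worth commissioning no matter how rigorous it is – and if the decision wouldn’t change under any plausible finding, its information value is zero.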

There is so much untapped potential in arts research. But it remains untapped because of all the issues described above. So what can we do about it?

First, we need a major field-building effort for arts research. Connecting researchers with each other through a virtual network/community of practice would help a lot. So would a centralized clearinghouse where all research can live, even if some of it remains behind a paywall. The good news is that the National Endowment for the Arts has already been making some moves in this direction. The Endowment published a monograph a couple of months ago called “How Art Works,” the major focus of which was a so-called “system map” for the arts. But the document also included a pretty detailed research agenda – for the NEA, not for the entire field – laying out what the NEA’s Office of Research and Analysis is going to do over the next five years, and two of the items mentioned are exactly the two things I just talked about: a virtual research network and a centralized clearinghouse for arts research.

This new field that we’re building should be guided by a national research agenda that is collaboratively generated and directly tied to decisions of consequence. The missing piece from the research agenda in “How Art Works” is the tie to actual decisions. Instead it has categories, like cultural participation, and research projects can be sorted under those buckets. But it’s not enough for research to simply be about something – research should serve some purpose. What do we actually need to know in order to do our jobs better?

We should be asking researchers to spend less time generating new research and more time critically evaluating other people’s research. We need to generate lots more discussion about the research that is already being produced. That’s the only way it’s going to enter the public consciousness. Each time we fail to do that, we are missing out on opportunities to increase knowledge. Engaging in healthy debate about research will also raise our collective standards for it. But realistically, in order for this to happen, field incentives are going to have to change – analyzing existing research will need to be seen as just as prestigious and worthy of funding as creating a new study. Of course, I would prefer that people not evaluate the work of their direct competitors – but I’ll take what I can get at this point!

Every research effort should take into account the expected value of the information it will produce. Consider the risk involved in the various types of grants being made. What are you trying to achieve by giving out lots of small grants, if that’s what you’re doing? Maybe measure the effectiveness of the overall strategy instead of the success or failure of each grant. This is getting into hypothesis territory, but based on what I’ve seen so far I would guess that research on grant strategy is woefully underfunded, while research on the effectiveness or potential of specific grants is probably overfunded. We probably worry more than we need to about individual grants, but not nearly enough about whether the ways in which we decide which grants to support are the right ones.

Finally, we should be open-sourcing research and working as a team. I’m talking about sharing not just finished products and final reports, but plans, data, methodologies as well. I’m talking about seeking multiple uses and potential partners at every point for the work we’re doing. This would make our work more effective by allowing us to leverage each other’s strengths – we’re not all experts at everything, after all! And it would cut down on duplicated effort and free up expensive people’s time to do work that moves the field forward.

I thank everyone for their time, and I’d love to take any questions or comments on these thoughts about the state of our research field.

  • I thoroughly enjoyed your post and agree with many of the issues outlined. Southern Methodist University just launched the National Center for Arts Research (NCAR) which will house the largest arts research database to date. We have several national data partners and are in the process of analyzing data that will benefit the arts community via a “state of the arts” report, an interactive online dashboard, and an online resource library. In the future, we will also organize symposia and invite research fellows to work with the data.

    NCAR is committed to bridging the gap between academic research and the professional arts ecosystem. We are just getting started, and I invite you to learn more at our site.

  • This blog entry made a great companion to Barry’s Blogathon of March 4! I agree that it would be nice to see some of the collaborative nature of the arts sector bleed over into the research arm of the field. As someone whose experience is mostly in higher education, I can tell you that when I wanted to replicate a study for validation as a thesis project in grad school, my committee flatly told me no, that wasn’t enough to award me my degree. In graduate school my cohort and I were also encouraged to keep our distance from each other with regard to research, lest we start subconsciously plagiarizing each other. However, the main result of this is that I only have a vague idea of the things some of my peers are working on. It seems to me that it would only benefit the field if graduate students especially were encouraged to replicate and critically evaluate past studies. It would be an excellent way to review research as well as good training for graduate students just becoming familiar with research methods.

    To Issue #2, I’d also like to point out that many local libraries subscribe to pay services and research clearinghouses. For instance, I can access EBSCO from home (with my library card login) through my library. Google Scholar might be so obvious as to not even be worth mentioning, but it does make it easy to find articles that are accessible to anyone in PDF format (the “Cited by” feature can also lead to many fun rabbit holes). In the name of better research I would like to see more journals moving towards free access, but in the meantime there are some pretty easy workarounds!