When it comes to evaluating medical interventions – whether a drug is safe, or whether a certain kind of exercise encourages better health – evidence and data are par for the course. Yet for interventions in the arts, our expectations are almost the opposite; if anything, we are skeptical of attempts to measure impact. But arts interventions by the government use the same real-world dollars and cents as interventions in other areas. Shouldn’t we hold government spending to a high standard of effectiveness regardless of what those policies are trying to achieve?

The UK government has been asking just this question. Drawing on the experience of the medical community’s National Institute for Health and Care Excellence (NICE), which systematically assesses and synthesizes the cost-effectiveness of medical interventions to help the UK National Health Service (NHS) prioritize its public spending, the UK Cabinet Office has envisioned its What Works initiative as a “NICE for social policy.”

Launched in March 2013, the What Works Network set out to evaluate the performance of public policies and programs using evidence collected throughout their implementation. For every policy or program, What Works tracks which social benefits the program has achieved and how much those benefits cost per participant. What Works evaluations aim to help policymakers and practitioners improve their decision-making by providing evidence and advising on which interventions offer the best value for money.

Generally speaking, the What Works methodology rates a policy or program on its effectiveness according to indicators that are specific to the sector in question. For example, the success of an education policy might be rated according to how many additional months of progress a student makes in a classroom. This rating becomes even more useful to practitioners when complemented by an estimate of how much money it costs per student to implement the policy.

The Network is now in its third year of operation, and seven “What Works Centres” and two affiliate members – What Works Scotland and the Public Policy Institute for Wales – have been established thus far across the UK, each focused on a particular area of policy. They monitor and evaluate interventions according to a standardized methodology in seven categories: educational achievement, local economic growth, crime reduction, health and social care, wellbeing, improved quality of life for older people, and early intervention for at-risk children. And yes, the What Works initiative is evaluating arts interventions within the broader context of these public policy areas.

For example, the What Works Centre for Well-Being analyzes the impacts of culture and sports on wellbeing according to the same four dimensions – satisfaction with life, happiness, worthwhileness, and anxiety – used by the UK Office for National Statistics to assess personal and subjective wellbeing. Similarly, the Sutton Trust and Education Endowment Foundation’s (EEF) Teaching and Learning Toolkit provides an easy-to-read ranking of the cost-effectiveness of arts participation for improving educational outcomes for students aged 5-16, relative to other interventions.

So what does What Works say about how the arts work? A report issued by the UK government in 2014 presented a selection of early findings from the six What Works Centres that had been active up to that date. Two projects relating to the arts were included. One project from the Centre for Local Economic Growth examined 36 evaluations covering the impact of major sport and culture projects on the local economy and found that measurable impacts were rare, and small where they existed at all. Built facilities, however – with sporting facilities comprising the vast bulk of the evidence – might increase the value of properties in their immediate vicinity. The Sutton Trust-EEF Teaching and Learning Toolkit found that arts participation had a low positive impact on students’ educational attainment (defined as “additional months progress you might expect pupils to make as a result of an approach being used in school, taking average pupil progress over a year as a benchmark”), but at much lower cost compared to some other learning interventions with similar impact, such as attending summer school or using teaching assistants. The Teaching & Learning Toolkit collects impact evidence from EEF projects covering 34 education topics, and impact results are regularly updated and summarized as they are collected.

What Works’s venture into evidence-based policy was still in its infancy at the time of the report’s publication. Since then, several new arts-related projects have been commissioned and evaluated. One project, “Act, Sing, Play,” sought to answer whether exposure to high-quality music education was more cost-effective than drama participation for improving students’ literacy and math scores, while another, “SHINE on Manchester,” assessed to what extent Saturday music education improved students’ literacy and math scores. While assessments of these two projects did not yield convincing evidence that participation in the arts helped achieve the designated outcomes of improved literacy and math scores, they also did not discount the possibility that arts participation might yield other positive outcomes.

Is evidence-based evaluation of public policy the wave of the future? In the United States, a loose alliance of several organizations would like to make it so. Results for America aims to spearhead smart policy changes at all government levels by encouraging the use of best available data, evidence and evaluation about what’s effective. The Laura and John Arnold Foundation (LJAF) is likewise active in this area, having recently absorbed the Coalition for Evidence-Based Policy, a Washington think tank that had success in advocating for government reforms, into its grantmaking. At this very moment, LJAF is holding a $15 million competition to encourage government and nonprofit organizations to implement highly effective programs, and Results for America just launched a global initiative “identifying the policies, programs and systems that governments are using to support the production and use of data and evidence” called Results for All.

In theory, this approach of collecting, synthesizing, and ranking evidence from a diverse range of policy and program evaluations will help make that evidence accessible to a wide audience – and that is undoubtedly a good thing. At the same time, paring down the impacts of policies and programs to cost-effectiveness might be challenging when goals are less readily quantifiable, or where effectiveness needs to be assessed according to more innovative or perhaps even abstract criteria. In such cases, less relevant targets might become more appealing to policymakers because they are cheaper or easier to tag with numbers, resulting in an oversimplified framework for measuring impact that displaces a true understanding of effectiveness. Arts and cultural policies are arguably particularly vulnerable to this risk, especially given that we are only beginning to understand the true nature of their value to individuals and society. At Createquity, we don’t think it is impossible for the benefits of the arts to be assessed under a What-Works-style evaluation framework, but we do have to be careful that we are attempting to measure the right things – the things the arts are actually good for.

The relatively small and weak body of information and data on the impacts of arts and cultural policies and programs shows that there are significant gaps and limitations – but also much room to grow – in What Works’s assessments of arts interventions going forward. In the meantime, we can do our part to contribute evidence to What Works inquiries by submitting tips, research, and assessments of public policies to the relevant Centres. As of publication, EEF is seeking new education interventions to fund and evaluate, and welcomes applications from “projects that show promising evidence of having a measurable impact on attainment or a directly related outcome” until December 9.


Cover image: “Mathematica” courtesy of Ivan T. via Flickr Creative Commons license.

  • Susan

    What’s happened to all the data collected by the CDP (now DataArts, I think)? Does anybody look at it? Does it collect the right information? Funders (some) request it as part of a grant application, but do they find it helpful? My guess is that some ask for it mostly to reassure themselves that the organization is collecting and reporting data using a standardized method, but never actually look at it — in part because the funder still finds it most useful to have data reported in a form unique to that funder. And of course, as you allude to in this post, once we’ve collected the data, what the heck do we do with it?

    • Ian David Moss

      Susan, the type of evidence used by the What Works initiative is very different from what’s collected through DataArts’s Cultural Data Profile (CDP). What Works uses formal evaluations and research studies that measure the connection between various kinds of arts interventions or programs and specific outcomes like educational attainment or economic development. The data collection for these evaluations tends to be much more in-depth and ambitious than what’s collected through the CDP, but on the plus side you only need a few of them to establish an evidence base (i.e., you don’t need to conduct an evaluation on every organization that offers arts education programming to get a general idea of whether arts education programming is effective). I’m not able to answer all your questions about how funders use the CDP, but felt it was important to distinguish it from What Works.

      • Susan

        Thanks, Ian. That is very helpful.