In Defense of Logic Models

Photo by 707d3k

Last month, my post Creative Placemaking Has an Outcomes Problem generated a lot of discussion about creative placemaking and grantmaking strategy, much of it really great. If you haven’t had a chance, please check out these thoughtful and substantive responses by—just to name a few—Richard Layman, Niels Strandskov, Seth Beattie, Lance Olson, Andrew Taylor, Diane Ragsdale, Laura Zabel, and most recently, ArtPlace itself. I’m immensely grateful for the seriousness with which these and other readers have taken my critique, and their questions and suggestions for further reading have been tremendously illuminating.

Now that the talk has subsided a bit, it’s a good opportunity to clarify and elaborate upon some of the subtler points that I was trying to make in that piece, which definitely left a bit of room for people to read in their own interpretations.  So, just to be clear: despite the provocative title, I wasn’t trying to slam the practice of creative placemaking itself, nor call it into question as a focus area for policy and philanthropy in general. As I wrote in response to one of the comments on the original post, I believe strongly in the power of the arts to have a role in revitalizing communities, and I view the desire to direct resources toward bringing such efforts to life as a very positive impulse on the part of funders and policymakers. Furthermore, although I agree with the point made by John Shibley and others that the arts may not be the best way to foment economic development, no one said that cities and regions can only use one strategy. Economic development is a complex beast, and intuition and common sense would hold that there are most likely some specific situations in which the arts can have a real, irreplaceable, and catalytic impact.

My critique is really about how we don’t have much information about what those situations are – nor about how infusions of philanthropic capital can make a difference in those situations. What’s more, I am not confident that the tools we’re currently developing, as useful as they may be for other purposes, will get us there on their own. My contention is that logic models and their conceptual cousins, theories of change, can be useful tools in filling this gap – by forcing us to articulate our assumptions about the way the world works, and by providing a framework that we can use to test those assumptions. The problem is that most of the logic models that I see aren’t worked out to the level of detail that I believe is necessary to gain really useful information about the dynamics of these complex processes. In my post, I provided a couple of examples of theories that, while surely far from perfect, at least attempt to recognize some of the numerous and interlocking assumptions embedded in grantmaking of the kind engaged in by today’s funders supporting creative placemaking.

It’s clear from some of the responses, however, that not everybody shares my optimism about the utility of logic models. Laura Zabel writes that she “hates” them for being too reductive. Diane Ragsdale, taking a cynical view, worries that funders may misuse them to make themselves seem smarter than they really are, or to blame grantees for failed strategies. ArtPlace’s response suggests that logic models raise the bar for research too high: because proving a causal connection between these investments and the change they produce (or don’t) is so difficult, we’re better off not trying. While I can sympathize with each of these critiques, I also think they give logic models a bad rap. Logic models are a tool of tremendous power whose potential is only beginning to be unlocked. It’s true that, just like philanthropy and policy, logic models can be done very badly. But that doesn’t mean there’s no gain for us in trying to do them well.

Before we get into all that, however, I’m guessing that some of you probably could use a refresher course on logic models and the terminology associated with them, which can be quite confusing. So let’s start with a little background on what this is all about.

What the Hell Is a Logic Model, Anyway?

Simply put, a logic model is a method of describing and visualizing a strategy. Logic models have their conceptual origin in the “logical framework approach” originally developed for USAID in 1969 by Leon J. Rosenberg of Fry Consultants. Their use was largely concentrated in the international aid arena until 1996, when United Way of America published a manual on program outcome measurement and encouraged its hundreds of local agencies and thousands of grantees to adopt logic models as a matter of course. Since then, large private funders such as the Kellogg and Hewlett Foundations have integrated logic models into their program design and execution, and the concept is commonly taught in graduate programs in public policy, urban planning, and beyond.

Even though logic models have achieved greater adoption over the past several decades, there is little standardization in the content, format, and level of ambition seen in professionally produced logic models for institutions large and small. Worse, everyone seems to want to come up with their own terms to describe features of the logic model, and as a result, you’ll notice a lot of variation in language as well. Below, I’ll do my best to isolate the elements that most of these efforts have in common.

Nearly all logic models contain the following fundamental elements. In combination, they describe a linear, causal pathway between programs or policy interventions and an aspirational end-state.

  • Activities are actions or strategies undertaken by the organization that is the subject of the logic model. These activities usually take place in the context of ongoing programs, although they can also be one-time projects, special initiatives, or policies such as legislation or regulation.
  • Outputs refer to observable indications that the above activities are being implemented correctly and as designed.
  • Outcomes are the desired short-, medium-, or long-term results of successful program or policy implementation.
  • Impacts (or Goals) represent the highest purpose of the program, policy, or agency that is the subject of the logic model. Sometimes you’ll find these lumped in with Outcomes.

Logic model for a bicycle helmet public information campaign, courtesy RUSH Project

Many realizations of logic models combine these essential elements with additional information that provides contextual background for this causal pathway. Several of these supplemental concepts are listed below in approximate order from most common to most obscure.

  • Measures (or Indicators) for outputs, outcomes, and impacts are concrete, usually quantitative data points that shed light on the degree to which each result has been achieved.
  • Inputs are resources available to the program or organization in accomplishing its goals.
  • Assumptions are preconditions upon which the model rests. If one or more assumptions proves unsound, the integrity of the model may be threatened.
  • Benchmarks extend the concept of measures to incorporate specific target goals (so, not just “# of petitions delivered to Congress” but “50,000 petitions delivered to Congress”).
  • Target Population refers to the audience(s) for the activities listed in the logic model.
  • Influential Factors are variables or circumstances that exist in the broader environment and could affect the performance of the strategy as designed (e.g., an upcoming election cycle whose outcome might change the underlying landscape in which the program operates).
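
For readers who think in code, the elements above can be collected into a simple data structure. This is purely an illustrative sketch (the field names and the helmet-campaign wording are mine, adapted from the example diagram), but it shows how little is actually required to write a logic model down:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: a linear causal pathway plus optional context."""
    activities: list[str]   # actions or strategies undertaken
    outputs: list[str]      # observable signs of correct implementation
    outcomes: list[str]     # desired short-, medium-, or long-term results
    impacts: list[str]      # the highest purpose of the program or policy
    measures: dict[str, str] = field(default_factory=dict)  # indicator per result
    assumptions: list[str] = field(default_factory=list)    # preconditions the model rests on

    def pathway(self) -> str:
        """Render the causal chain in reading order."""
        stages = [self.activities, self.outputs, self.outcomes, self.impacts]
        return " -> ".join("; ".join(stage) for stage in stages if stage)

# Hypothetical encoding of the bicycle helmet campaign example
helmet = LogicModel(
    activities=["Run a bicycle helmet public information campaign"],
    outputs=["# of ads aired", "# of residents reached"],
    outcomes=["More riders wear helmets"],
    impacts=["Fewer head injuries among cyclists"],
    assumptions=["Awareness of risk actually changes rider behavior"],
)
print(helmet.pathway())
```

Nothing about the structure is exotic; the hard part, as we’ll see, is filling in the assumptions honestly.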

What About Theories of Change?

A frequently employed alternative approach strips out this latter set of contextual elements and instead visualizes the linear causal chain at a finer grain of detail. This version of a logic model is typically referred to as a theory of change (or, sometimes, a program theory). A well-executed theory of change diagram “unpacks” the processes and factors that lead to successful outcomes, exposing relationships between isolated variables that can then become the subject of research or evaluation.

Partial theory of change from Project SuperWomen case study (ActKnowledge/Aspen Institute Roundtable on Community Change)

Sometimes logic models and theories of change are presented as distinct concepts, while other times they really refer to the same thing. This is because logic models and theories of change evolved out of distinct communities of practice, but the philanthropic field has not always respected the distinction in the terminology it’s adopted to describe these tools. In my own practice I prefer to use theories of change, but for the sake of simplicity and readability, in the rest of this article I’m going to use the term “logic model” inclusively to refer to any diagram that clearly shows some combination of activities and outcomes, regardless of what other elements it may include or the visual approach it takes.

*

OK, now that we have our definitions in order, we can start talking about what makes logic models so awesome.

Awesome #1: Logic Models Describe What’s Already Going On in Your Head

So, here’s the thing: the core questions involved in creating any logic model—What am I trying to do? Why am I trying to do it? How will I know if I’ve succeeded or not?—represent the very essence of strategy. As a rabbi might say, “the rest is commentary.” If you have a strong sense of what the answers to these questions are, then you have a logic model in your head whether you realize it or not. All the diagram does is make it explicit.

To illustrate this, we can look at a simple example. Let’s say I decide I’m done with this whole “arts” thing and I want to go to law school instead. I know, though, that in order to get into a good law school I need to get a good score on the LSAT. So, how can I make sure I get a good score? Intuitively, I decide that taking a test prep class is the way to go.

Why do I think taking a test prep class is a way to increase my score on the LSAT? Well, if my score isn’t as high as it could be, it’s probably due to some combination of two factors. First, I may not know the material well enough. So, if the class helps me to learn how to answer the test questions better, I’ll likely perform better on the test. Second, there may be a psychological factor as well. If I’m someone who gets nervous on tests, then my performance on them may suffer. The practice exams and deep engagement with the material that comes with a class could help me to get more comfortable with the idea of the LSAT and make it seem less intimidating, thus improving my performance.

Seems logical enough, right? And voila, it lends itself quite easily to a logic model:

Sample program theory
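
For the code-minded, the same theory can be written out as a list of if-then links. This is a hypothetical rendering (the wording of each link is mine, drawn from the prose above), but it makes plain that the diagram is just a set of causal claims:

```python
# The LSAT test-prep theory as a chain of cause -> effect assumptions.
theory = [
    ("Take an LSAT prep class", "Learn the material better"),
    ("Take an LSAT prep class", "Feel less test anxiety"),
    ("Learn the material better", "Score higher on the LSAT"),
    ("Feel less test anxiety", "Score higher on the LSAT"),
    ("Score higher on the LSAT", "Get into a good law school"),
]

for cause, effect in theory:
    print(f"IF {cause} THEN {effect}")

# Every link is a testable assumption: e.g., does reduced anxiety
# actually raise scores, or only greater familiarity with the material?
```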

The truth is that any decision you make, if it has any element of intentionality at all, can be diagrammed as a logic model. You might hate logic models with every fiber of your being and think they’re the stupidest thing ever created, but I’m telling you right now: if you believe in strategy, then you believe in logic models.

Awesome #2: Logic Models Are Incredibly Flexible

Now, there’s a difference between having a logic model in your head and having a good logic model in your head. The example above is simple, but it’s limited by that simplicity. It doesn’t explain why I might have decided to go to law school, or explore other ways that I could get into the school I want besides increasing my test score. In short, it’s pretty much just a straight-up mapping of a decision already made.

The best logic models don’t do that. Instead, they proceed with the end in mind (what is the goal we want to achieve?) and then methodically work backwards to understand what activities or strategies would be most appropriate to achieve that end. The ultimate outcome of this exercise may be a very different set of strategies than the ones you were originally contemplating or the programs you already have in place! Because of that, the logic model creation process can be great for opening up new ways of thinking about old problems or longstanding dreams, as well as clarifying what’s really important to you and/or your organization.

I mentioned earlier that not everyone is a fan of logic models. Here’s what Laura Zabel had to say about them in her post responding to mine:

I hate logic models. For me they are, somehow, simultaneously too reductive and too complex. Too simple, too linear for how I think the world works and too dry, too chart-y for how beautiful the world is. They make me irrationally grumpy.

Arlene Goldbard, in a 2010 essay, is similarly grumpy about logic models:

[R]equiring one of these charts as part of a grant proposal bears about as much real relationship to community organizations’ work as would asking each to weave a placemat…[T]he task of boiling the answers down to colored bars often wastes days, compressing most of the useful meaning out of the inquiry.

I can’t speak to Laura’s and Arlene’s experiences directly, but I know they are not uncommon. Unfortunately, logic models that are rife with imprecision, questionable assumptions, and inappropriate associations are more frustrating to work with than no logic model at all—and it’s not as easy as it looks to create logic models that are free of these flaws. Such problems are magnified when logic models are treated as edicts sent down from on high rather than the learning, living documents that they are intended to be.

In her post, Laura presents an alternative formulation of a logic model that describes her theory of change for creative placemaking: artists + love + authenticity -> creative placemaking. While I’d classify this as more of a definition of creative placemaking than a logic model, it goes a long way toward illustrating my point that we all have latent logic models in our heads that are just waiting to be expressed as such. Laura writes, “there’s no logic model in the world that can capture how a crazy parade [the annual MayDay parade in Minneapolis] can restore my faith in humanity.” I couldn’t disagree more – in fact, I made one, relying solely on the way Laura describes the parade in her post. Here you go:

Laura Zabel's Faith in Humanity

For the past few months, I’ve been researching impact assessment methods used across the social sector in connection with some evaluation work we’re doing here at Fractured Atlas. Seemingly every year, someone comes up with a new way of evaluating impact, whether it’s for social purpose investing, choosing grants, or measuring externalities. I’m not done yet, but what I’ve found so far has only reinforced my appreciation for the logic model. The beauty of logic models is that, because they relate so directly to the fundamental elements of strategy, they are endlessly adaptable to almost any situation. I actually find it kind of funny when people call logic models too rigid, given the alternatives – especially considering how much of our lives is ruled by the granddaddy of rigid, one-dimensional success metrics: money.

Awesome #3: Logic Models Are a Victory for Transparency

Hewlett Foundation Performing Arts Program strategic framework

One of the really powerful things about downloading the implicit strategy that exists in your head into a diagram is that it confronts you with gaps that may be present in that strategy and allows you to try and work through them. The Underpants Gnomes example that I used in my creative placemaking post is a great illustration of this. The Gnomes clearly felt that stealing underpants would lead to profit, but hadn’t clearly thought through the fuzzy middle part of the scheme. The Underpants Gnomes might seem like a fanciful exaggeration of the problem I’m talking about, but I would argue that there’s been more than one arts organization (and funding initiative) started with a similar lack of congruity between the proposed activities and intended results.

Diane Ragsdale is skeptical that logic models can serve this function, suggesting that funders might misuse logic models and turn them against their grantees. First, I think it’s important to make a distinction here: Diane, Arlene, and Laura are all talking about logic models at the level of the individual organization or project. While I think these can be helpful, in my mind the most important logic model is the one for the funder itself. This is admittedly a rarer practice, but several foundations – such as Ford, Hewlett, McKnight, and Boston – have taken the step of not only developing a logic model to describe their grantmaking strategies, but sharing that logic model publicly. Those that don’t publish at least sometimes make their models available to peers outside the organization. Once a logic model is “out there,” there’s no taking it back. Logic models thus pull back the curtain on a funder’s current thinking, and they make it harder to project, after the fact, the illusion that the funder knew how things were going to work out all along.

Even more importantly, though, logic models are a victory for transparency with oneself, not just with others. The most important part of any logic model creation process is the set of assumptions revealed about how your program or organization works, and what it needs to be successful. Sometimes these assumptions might seem like no-brainers, and other times they will seem as unproven as they are central. Being comfortable with naming your assumptions as such is not just good practice for ensuring that your organization is constantly learning and growing. It’s also extremely helpful on a psychological level for dealing with the specter of possible failure. Because logic models explicitly draw a distinction between program design and program execution, they acknowledge the very real possibility that you could be doing your job perfectly and your program could still fail, because its theoretical foundation rests upon faulty assumptions. This is an incredibly freeing realization, because it means that radically changing or even scrapping a program that isn’t working doesn’t necessarily have to mean changing program leadership.

One of the reasons people sometimes feel anxious about evaluation and measurement is because they’re afraid of being held accountable, especially to things that they don’t have full control over or to metrics that don’t seem relevant to what they’re trying to do. When that happens, there are enormous incentives on managers and their supervisors to “cook the books” or otherwise game the system to show results that look better than reality, because any failure—even failures that are no one’s fault—reflects on them personally. That’s the danger of trying to enforce a data-driven culture without first developing the theoretical frameworks that determine what data you’re trying to collect. Because logic models separate the person from the program, they can distinguish between lagging initiatives that might just need more time to prove themselves, and failures of design that can be transformed into productive learning opportunities.

A Note About Logic Models and “Proof”

One of the criticisms directed at logic models generally, and by ArtPlace at my post specifically, is that they promote an impossible standard for proof. Here’s what ArtPlace had to say about it:

A critical limitation of elaborate logic models of the style developed by Moss is that it is nearly impossible to quantify or measure all of the different factors and relationships proposed. While many of the asserted relationships are plausible…almost none are measured in practice. Many – if not most – of the variables…could only be gauged imprecisely at great expense or are not susceptible of measurement at all. Multiply this problem by the 50 variables the model uses and the dozens of relationships it asserts, and it’s clear that it is beyond anyone’s ability to actually prove or disprove the model for even a single metropolitan area, much less the nation.

In one sense this criticism is right: these are difficult research challenges that highly qualified professionals have been struggling to address for decades, notwithstanding some of the more promising approaches demonstrated at the recent NEA/Brookings Institution convening on economic development and the arts.

But contrary to popular perception, the term “scientific proof” is a misnomer: proof is a concept germane to mathematics, not science. (Admittedly, my previous post could have been clearer on this point.) Scientists, including social scientists, develop hypotheses about how the world works and then gather evidence to support or undermine those hypotheses. Whereas proof is black-and-white, evidence has shades of gray: it can be strong or weak, circumstantial or conclusive. My colleague Kiley Arroyo made a great courtroom analogy in response to my creative placemaking post: she wrote, “think of forensic analysis if you will. You’re not just going to look at where and how the bullet hit, but what it was shot from, where, by whom and why.” Our job as researchers is not to “prove” anything – instead, it’s to amass evidence in search of the truth.

The biggest problem I see with most logic models is that they are too simple for what they are trying to describe, and thus consign us to amassing a whole bunch of weak evidence. Logic models are often developed as much for communication purposes as for research, and can thus face intense pressure to be “dumbed down” for public consumption. I frequently hear comments like, “make sure there are no more than five categories, because after that you’re going to lose people,” and God help you if your diagram doesn’t all fit on one page. But think about it: would you apply this standard to a budget? To a work plan for a major, mission-critical project with many moving parts? I’m not interested in logic models as a communication tool; I’m interested in them as a means to help us do our jobs better.

In the case of ArtsWave, we do intend to collect data to show progress toward the goals that have been established through the model. But the task is less daunting than it looks. Because we are not trying to “prove” the model, not everything has to be measured directly; indeed, not everything has to be measured at all! Instead, the model serves as a road map for deciding what is most important to measure, given what we don’t know. Is it the assumption that the arts can differentiate Cincinnati from its peer cities in the minds of tourists and potential residents? Can we be confident that people from diverse backgrounds will interact with each other if they happen to come together for the same community-wide event? A smart research design will test the assumptions that are most in need of testing, and the purpose of the modeling exercise is to identify which assumptions those are.

It’s not like we are all alone in this effort, either. There is an ever-growing body of literature on the ways that the arts interact with communities, and there is no need for us to demonstrate yet again connections for which strong evidence exists in other contexts. Furthermore, since many of the data points involve stakeholders beyond the arts, there is an opportunity to collaborate with other local entities to share resources and develop knowledge infrastructure collaboratively. Cincinnati is home to the STRIVE initiative, which has become famous in the broader social sector as the poster child for the “Collective Impact” concept coined by consultants from FSG Social Impact Advisors. One of STRIVE’s chief accomplishments has been the development of an “adaptive learning system” used by hundreds of education nonprofits in the region, which has helped align the efforts of those organizations around a common set of purposes and benchmarks. If it worked in education, and in the same city no less, why can’t it work in the arts?

That being said, none of the three benefits that I’ve cited above—articulation, flexibility, and transparency—require “proving” the logic model. They all come, at least in part, just from creating one. And creating a logic model doesn’t have to be a tortured, involved process. It doesn’t have to cost hundreds of thousands of dollars. You can sketch one out on the back of a napkin (okay, for some things you might need a really big napkin). Creating a logic model for something that will require a lot of your time or money is one of the most highly leveraged activities you can possibly undertake. I hope more arts funders will do so.

Comments

  1. Posted June 28th, 2012 at 9:27 pm

    Hi, Ian,

    I appreciate the care you took to fully explore this issue. And yet, it assumes things that really need to be examined. I’m not going to write the long essay it would take to list them all, but just mention a few main points.

    First, my main critique of logic models is not that they are reductive, although by definition, they are: you take a complex story with many dynamic interacting factors and turn it into a chart. It’s that when funders use them (on the individual project/organization level, as you point out), they have an obvious, instrumental function: to winnow out applicants who don’t want to or aren’t able to ante in with logic models and theories of change. It’s a culling process first and foremost, because once these documents are received, they are seldom studied, let alone used as a grantmaking criterion. Before and after the craze for logic models, most funders gave most money to the groups they know, those vouched for by people they network and feel comfortable with; and after logic models? The same. For grantmaking purposes, show me a set of decisions in which a logic model was demonstrably determinative—I haven’t heard of one.

    Second, the whole concept is grounded in an inadequate idea of instrumentality. It takes far longer than most grant cycles to assess meaningful impact. The best work incorporates action-research, a process of constant experimentation and evaluation that changes tactics (and whole strategies) in response to experience. Whether you’re talking about individual personality, communities, foundations, or anything else involving human beings, if we knew that X input would bring Y result, everyone would do it, no? It’s an especially silly example of scientistic hubris to imagine anyone is truly able to evaluate the kind of assertion most logic models make in the scale and timeframe allotted to demonstrate what is asserted.

    Third, who is going to hold funders accountable for their logic models, as you propose? They are not accountable to grantees (most of whom would not want to endanger their funding by calling out a funder for logic-model failure); and public accountability is for financial stewardship, not results. Saying your grantmaking is based on a certain set of assumptions and principles doesn’t make it so, and there’s no practical way in the current system to ensure that accountability.

    With the current infatuations with metrics, we are in the grip of a way of thinking that is well past its sell-by date. Its main weakness is that it privileges what can be quantified in a field that is fluid, interactive, and organic by nature. You can create a logic model, deciding what can be measured and how to measure it. But so what? All you’ve done is set up a self-ratifying system. If the goals are to enliven communities, strengthen cultural fabric, stimulate public participation, etc., what’s needed is ongoing funding, an experimental spirit, far more willingness to risk than most funders have, and a talent for stories to capture the meaning of it all. The rest is just whistling in the dark.

    all best,
    Arlene

    • Posted June 29th, 2012 at 12:31 pm

      Thanks for your challenging feedback as always, Arlene. In one sense I agree with you: that the standards for use regarding logic models could be much higher on the part of many of the funding bodies who either create them or request them. As I wrote in the beginning of the essay, I see logic models as a tool with tremendous potential, but whose potential is not currently being maximized by the field. We could say the same, by the way, about arts research – all too often a small handful of people move mountains to study a particular issue, only to have the fruits of that labor rot on the office shelves of arts practitioners who lack the time to explore the work critically or place it into a larger context. That’s one gap that I try to fill with this blog, though we have a long way to go yet.

      With respect to your specific challenge to “show me a set of decisions in which a logic model was demonstrably determinative,” without being privy to internal deliberations about specific grants, I would imagine that the vast majority of such decisions would not be ones that I would know about. That being said, I can provide a very concrete example in the form of ArtsWave, which just recently announced its first set of grants under its new, logic-model-driven guidelines. As you can see from this news story, some organizations that had been supported for decades lost as much as $240,000 in grant funding, while others saw their grants double or triple. And that’s without a specific mandate to give more money to smaller organizations; the process was largely agnostic on that front. (It’s helpful to keep in mind that ArtsWave is not a private foundation but rather a united arts fund, which means that its historical purpose was to raise money for specific, major institutions.)

      • Posted June 29th, 2012 at 4:27 pm

        Ian, I think you’re mistaking the measuring device for the thing itself. ArtsWave changed its aim to emphasize collaboration and community impact, and is holding the majors to a somewhat higher standard in that way. (Average funding declines of around 10% for the majors don’t really undo the usual tipping of the balance in their favor; they still get the bulk, but it’s a little progress.)

        This could be accomplished with many different tools for planning and assessment. The same criteria and questions could be posed. Surely the grant narratives made all the same points as the logic models, just not in chart form. Again, if the proposals had been identical, excepting the requirement for a display of logic models, would the decisions have been different? Highly dubious, no? The requirement to translate one’s work into this sort of chart is a culling device first and foremost. It accomplishes nothing that couldn’t be achieved with astute questions and dialogue; and astute questions and dialogue would reveal much more.

      • Posted June 29th, 2012 at 5:50 pm

        So, one thing to clarify is that the ArtsWave grant process (which is all documented here) does not require the applicant to create a logic model for itself. Instead, ArtsWave’s own logic model was used to make decisions about the specific outcomes that it was looking for from grantees. Applicants were encouraged to describe how their activities would contribute towards these outcomes, and it is on this basis that the funding decisions were made.

        I’m a little on the fence about requiring logic models of grantees. I do think that a version of it that’s stripped down to the conceptual basics (so, e.g., that doesn’t require any graphic design skills) can work well, and I don’t think it’s an unreasonable thing to request of people seeking six-figure grants. But for a lot of situations I suspect it’s asking too much, especially once you start working with practitioners with little expertise in this area. At a minimum I would hope that funders requiring such would offer their applicants workshops in how to do it well, or be willing to point them to equivalent resources.

  2. Posted June 29th, 2012 at 9:16 am | Permalink

    Arlene – your points are well-taken, and (my relationship with Ian notwithstanding) I share some of your skepticism about logic models in practice.

    Your ultimate argument troubles me deeply, though, because it seems to suggest that as a field we (a) have no way of assessing whether we’re doing a good job and, as a result, (b) have no way of tracking whether we’re improving the quality and impact of our work over time. In both cases I suppose you could say that we must rely on a combination of intuition, judgment, and experience, which no doubt all have value. But in a field where many are concerned that our established models and methods are deeply flawed in the modern world, I don’t see how those subjective criteria can be sufficient.

    • Posted June 29th, 2012 at 12:32 pm | Permalink

      I understand your concern, Adam, but in the end, all meaningful criteria are subjective. It takes deep discussion and agreement to choose the right ones and hold ourselves to them honestly, but that’s what’s needed. “Objective” criteria are the weakest (e.g., I worked with an agency that thought it had hit the jackpot with “artist-client contact hours,” until someone realized that the best ratio would be one artist addressing an audience of a thousand, hardly a gauge of meaningful participation).

      Quantification is not truth: it’s just quantification, which is as subject to misinterpretation and manipulation as anything else. It troubles me that when some people contemplate getting rid of quantification as a main mode of assessment, they are left with the feeling you express: well, then, there’s no way of tracking progress and improving. To the contrary! I’ve worked with many other ways of assessing impact, value, accomplishment. Happy to share.

      all best,
      Arlene

      • Posted June 30th, 2012 at 11:08 am | Permalink

        Arlene – see, to me it sounds like you’ve just been working with crummy metrics. I agree that none are perfect and most can be manipulated, but some are certainly better than others.

        For example, in the for-profit world, everyone knows that short-term profit numbers are unreliable because they can be grossly manipulated by timing transactions. Cash flow, however, is a lot harder to fake, and impossible to fake long-term.

        In your example, “artist-client contact hours” is a proxy for quantity, but it says nothing about quality. So you’d have to find some other proxy for that – it could be as simple as survey feedback, or it could involve tracking second-order impacts like long-term changes in indicators of community vitality, etc.

        For me, the process of thinking through different possible metrics and evaluating their strengths and weaknesses is a great way of forcing yourself to question your assumptions and received wisdom.

        I’m curious, though, about your ways of assessing impact, value, and accomplishment that don’t rely on any kind of quantification. Can you share some of that? I’m having a hard time even fathoming what that would look like.

        Best,
        Adam

  3. Posted June 29th, 2012 at 11:04 am | Permalink

    Ian – thanks for breaking this down. I’ve been wary of logic models; they seemed overly complex and jargony. As you point out, it always seemed like I could intuitively map out strategies without them. However, I now agree with you that visualizing desired change in this way may be an effective tool to get organizers to see where the gaps exist, or where folks are not on the same page. FYI, I used to help run the MayDay parade, so I got a kick out of seeing “MayDay Festival and Laura Zabel’s Faith in Humanity” mapped out.

    With appreciation,
    Anne

    • Posted June 29th, 2012 at 5:04 pm | Permalink

      This comment really means a lot to me, Anne. Thank you! And I think Laura enjoyed her custom logic model as well. :)

  4. David Greenham
    Posted July 2nd, 2012 at 7:03 am | Permalink

    This entire discussion is incredibly valuable.

    I’ve worried that the arts tend to follow the model of education (or at least secondary education, too often) and jump at the latest ideas because they’re new, without ever digging deep enough to really explore them. In my career it started with the explosion of funding for simple art-making, when grants were everywhere. (A little theater I ran received thousands of dollars from FEMA (!) to make theater back in the old days – unthinkable today.) Later, partnerships were the key focus, which morphed into non-traditional partnerships. Eventually we waded through all the anti-smoking/anti-drug-use funding to pay for some art creation; Richard Florida inspired us all to consider the concept of the creative economy (which I don’t think we ever fully explored or understood, because very few were willing to just throw money at it); and now we’re diving into place, artist live/work spaces, creative placemaking, etc.

    While it’s challenging, and people who are way smarter than I am are raising great questions, as artists it seems that we’re better off digging deeper into the meaning and value of the ideas and really acting on them, rather than continuing to hope that if we just find the magic key phrases or concepts we can open the door to endless funding and support for our work.

    So thanks Ian! Keep digging!

  5. Posted July 2nd, 2012 at 9:59 am | Permalink

    Ian – we met in 2009 at AFTA’s Pre Conference on Creative Economy. I hope you’re well.

    This article was very informative and I thank you for it. I also wanted to bring some resources to the table in defense of the logic model. We at the Houston Arts Alliance utilize logic models only for our Capacity Building Initiative grants, not our arts programming grants. They have helped us understand exactly what we are investing in and what impact our grantees want to achieve. Most importantly, we require our beneficiaries to share their impacts as best practices and lessons learned for the field. These “Learning Sessions” allow for peer-to-peer critique and sharing of our investments, leaving organizations with new models and ways of building their organizational infrastructure.

    http://www.learningsessions1.eventbrite.com
    http://www.learningsessions2.eventbrite.com

    You can access these presentations on Slideshare (www.slideshare.com) with search terms: learning sessions HAA capacity building

    Our program framework has been guided and influenced by the resources and networks of Grantmakers for Effective Organizations (www.geofunders.org). I highly recommend that grantmakers in the arts utilize this resource to learn and apply strategies in the areas of capacity building and evaluation.

    In addition, the Innovation Network (www.innonet.org) provides user-friendly tools to create logic models online. Just plug in the narrative for each logic model component, and it builds the logic model graph for you! There are workbooks to guide you on what a logic model is and how you can graph it. And the resource is FREE!

    Keep in touch!
    Jerome

One Trackback

  1. [...] a clear logic model to relate our activities to their intended outcomes and impacts (here’s more about what a logic model is). This allows us both to explain ourselves to external funders and to have clear internal criteria [...]
