<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Createquity.</title>
	<atom:link href="https://createquity.com/tag/research-design/feed/" rel="self" type="application/rss+xml" />
	<link>https://createquity.com</link>
	<description>The most important issues in the arts...and what we can do about them.</description>
	<lastBuildDate>Wed, 15 Jul 2020 20:17:39 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>Science Doesn&#8217;t Have All the Answers: Should We Be Worried?</title>
		<link>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/</link>
		<comments>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/#comments</comments>
		<pubDate>Thu, 08 Nov 2012 14:11:02 +0000</pubDate>
		<dc:creator><![CDATA[Talia Gibas]]></dc:creator>
				<category><![CDATA[Research]]></category>
		<category><![CDATA[confirmation bias]]></category>
		<category><![CDATA[Createquity Fellowship]]></category>
		<category><![CDATA[evaluation]]></category>
		<category><![CDATA[impact assessment]]></category>
		<category><![CDATA[measurement in the arts]]></category>
		<category><![CDATA[research design]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=4071</guid>
		<description><![CDATA[On October 1 the science section of the New York Times ran two articles next to each other. One of them describes a recent study that concluded young children at play display behaviors similar to those of scientists, suggesting scientific inquiry is driven by human instinct. The other refers to the alarming extent to which&#8230; <a href="https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<div style="width: 510px" class="wp-caption aligncenter"><a href="http://www.flickr.com/photos/chachlate/5690684773/"><img fetchpriority="high" decoding="async" title="Double-blind study" src="http://farm6.staticflickr.com/5184/5690684773_33660aa857.jpg" alt="Double-blind study" width="500" height="492" /></a><p class="wp-caption-text">&#8220;a double-blind study,&#8221; photograph by Casey Holford</p></div>
<p>On October 1 the science section of the New York <em>Times</em> ran two articles next to each other. <a href="http://www.nytimes.com/2012/10/02/science/scientific-inquiry-among-the-preschool-set.html?_r=0">One of them</a> describes a <a href="http://www.sciencemag.org/content/337/6102/1623.abstract">recent study</a> that concluded young children at play display behaviors similar to those of scientists, suggesting scientific inquiry is driven by human instinct. The <a href="http://www.nytimes.com/2012/10/02/science/study-finds-fraud-is-widespread-in-retracted-scientific-papers.html?_r=2">other</a> refers to the alarming extent to which that human instinct muddies scientific inquiry along the way.</p>
<p>Recently the scientific community has dealt with controversies cascading across many areas of research.  Most of them relate to a phenomenon known as <a href="http://en.wikipedia.org/wiki/Publication_bias">publication bias</a>.  Put simply, publication bias occurs when research journals prioritize studies with thought-provoking—and at the very least statistically significant—results. This makes sense; it’s hard to get excited about studies that don’t show anything conclusive. We crave good stories, stunning breakthroughs, and world-changing discoveries. Such desire has driven scientific (and artistic) innovation throughout history.</p>
<p>The dark underbelly of this lust for meaning, however, is something called “significance chasing.” Researchers know their chances of getting published – and advancing their professional status – hinge on getting statistically significant results. They have a huge incentive to hunt for and read into anomalies in data – raising the possibility of over-interpreting those anomalies as due to something other than chance. An <a href="http://www.geography.unt.edu/~rice/geog5190/5190handouts/falsepositives.pdf">article in the journal <em>Psychological Science</em></a> illustrates this point eerily well. As the authors point out,</p>
<blockquote><p>It is common (and accepted practice) for researchers to explore various analytic alternatives, to search for a combination that yields ‘statistical significance,’ and then to report only what ‘worked’… This exploratory behavior is not the by-product of malicious intent, but rather the result of two factors: (a) ambiguity in how best to make these decisions and (b) the researcher’s desire to find a statistically significant result.</p></blockquote>
<p>To compound the problem, many researchers do not openly share their full data sets or calculation methods, and have few incentives to challenge one another’s findings. The <em>Psychological Science</em> article hammers the former point home with a simulated experiment that “shows” listening to a Beatles song makes you younger. That’s hooey, of course, but the authors’ point is that without stricter guidelines around how data sets are reported, nearly any relationship can be presented as statistically significant.</p>
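<p>To see how quickly those “degrees of freedom” add up, note that ten independent shots at significance, each with a 5% false-positive rate, produce at least one “hit” roughly 40% of the time (1 &#8722; 0.95<sup>10</sup> &#8776; 0.40). The short Python simulation below is my own illustration, not the article’s code, and it simplifies by treating each analytic alternative as an independent test (the article’s alternatives are correlated re-analyses of the same data, which inflate error rates less dramatically but in the same direction). It analyzes pure noise, reports only what “worked,” and finds “significant” results far more often than 5% of the time.</p>
<pre><code># A sketch of significance chasing: analyze pure noise several ways
# and report only the analysis that "worked". Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 2000       # simulated "studies"
n_per_group = 20       # participants per condition
n_alternatives = 10    # analytic alternatives tried per study

chased = 0
for _ in range(n_studies):
    for _ in range(n_alternatives):
        # Both groups are drawn from the same distribution, so the true
        # effect is zero and any "significant" result is a false positive.
        a = rng.normal(size=n_per_group)
        b = rng.normal(size=n_per_group)
        if stats.ttest_ind(a, b).pvalue &lt; 0.05:
            chased += 1
            break  # report what "worked"; stop looking

print("nominal false-positive rate: 0.05")
print("rate after chasing:", chased / n_studies)  # roughly 0.40
</code></pre>
<p>Committing to a single analysis in advance, as the pre-registration ideas discussed below would require, collapses that rate back toward the nominal 5%.</p>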
<p>How big of a problem is this? In the medical community it has raised frightening <a href="http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328">questions about cancer studies</a> that had been the basis for new treatments. It has caused <a href="http://www.nature.com/embor/journal/v9/n1/full/7401143.html">an increase in the number of retractions</a> issued in high-profile scientific journals – and a <a href="http://retractionwatch.wordpress.com/">blog devoted to tracking them</a>. And lest you think this concern is limited to the “hard” sciences, think again – it has already raised discussions of implications in <a href="http://aidontheedge.info/2012/05/29/reflections-on-bias-and-complexity/">humanitarian aid</a> and in the <a href="http://www.forbes.com/sites/freekvermeulen/2012/01/06/publication-bias-or-why-you-cant-trust-any-of-the-research-you-read/">more mainstream business community</a> (the latter summing things up nicely with a headline, “Why You Can’t Trust Any of the Research You Read”).</p>
<p>Yikes.</p>
<p>The idea that the scientific method is easily mucked up opens up a whole host of mind-bending questions. (What if there’s a publication bias toward studies about publication bias?  Eeek…). It forces us to stop and think about the fledgling world of arts research – a world that has desperately wanted to find good, hard scientific evidence of impact for a long time. Randomized controlled trials, double-blind studies and other sophisticated research methods seemed like a holy grail, promising that if we could cleverly adapt them to meet our needs, we would have indisputable evidence of the importance of the arts, and good, hard data to guide how we direct our resources. In light of these controversies, should we question our desire to be better researchers?</p>
<p>No – but we should learn from others’ mistakes, and take a hard look at institutional issues common across our fields. Many of the problems the scientific community is experiencing aren’t about the tools scientists have at their disposal, but the cultures in which those tools are used. A few months ago the editors of two high-profile medical journals, Drs. Ferric Fang and Arturo Casadevall, <a href="http://iai.asm.org/content/80/3/897.full">put out a call for “structural reforms”</a> to combat a “hypercompetitive” and “insecure” working environment they believe to be the heart of the issue. The structural flaws they identify include inadequate resources, a “leaky pipeline” of emerging talent, agenda-driven funding and administrative bloat.</p>
<p>Sound familiar?</p>
<p>The long-term implications for all research communities will unfold over time. Many of Fang and Casadevall’s recommendations are similar to those made within our own field: directing more funding toward salary support to increase job stability, streamlining grant application and reporting processes, and examining the strengths and weaknesses of peer grant review. A number of other ideas have been floated that may change established research practices. Creating a <a href="http://blog.givewell.org/2012/06/11/meta-research/">“journal of good questions”</a> that decides which studies to publish before their results are known would reward researchers for their curiosity and the strength of their proposed methodology. <a href="http://www.geography.unt.edu/~rice/geog5190/5190handouts/falsepositives.pdf">Limiting the “degrees of freedom”</a> researchers have in gathering additional data if their original data set does not yield anything “interesting” would limit significance chasing and, in theory, create a culture more tolerant of inconclusive results.</p>
<p>Regardless of which, if any, of these ideas stick, we need to acknowledge two things: a) our research is in all likelihood as prone, if not more prone, to these problems as the “hard sciences,” and b) the “best practices” we have been trying to emulate are not “fixed practices.” It’s often said that what arts researchers seek to measure is too squishy to fit into the traditional scientific process. If more and more people are realizing the process has a squish of its own – well then, maybe we don’t need to play “catch up” so much as try new things.</p>
<p>We may even come up with ideas useful to the more “established” fields we have been trying to emulate. The authors of the study in the first (less depressing) New York <em>Times</em> article concluded the preschoolers they observed behaved like scientists because they “form[ed] hypotheses, [ran] experiments, calculat[ed] probabilities and decipher[ed] causal relationships about the world.” I suspect that a group of arts researchers, observing the same group of children, would have interpreted those same behaviors as artistic. Human instinct drives scientific inquiry and artistic inquiry, and muddies both. Artists, one could argue, are a little more used to the mud.</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2012/11/science-doesnt-have-all-the-answers-should-we-be-worried/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>NAMP Blog Salon posts</title>
		<link>https://createquity.com/2011/10/namp-blog-salon-posts/</link>
		<comments>https://createquity.com/2011/10/namp-blog-salon-posts/#respond</comments>
		<pubDate>Thu, 13 Oct 2011 03:11:21 +0000</pubDate>
		<dc:creator><![CDATA[Ian David Moss]]></dc:creator>
				<category><![CDATA[Economy]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[AFTA]]></category>
		<category><![CDATA[arts marketing]]></category>
		<category><![CDATA[NAMP]]></category>
		<category><![CDATA[research design]]></category>

		<guid isPermaLink="false">https://createquity.com/?p=2808</guid>
		<description><![CDATA[Last week, I participated in the National Arts Marketing Project Blog Salon over at Americans for the Arts. My two entries focused on applying research and feedback-gathering principles to a marketing context. Not the typical Createquity fare, but if you find such things of interest, here&#8217;s some more information below. Is Your Arts Programming Usable? <a href="https://createquity.com/2011/10/namp-blog-salon-posts/" class="read-more">Read&#160;More</a>]]></description>
				<content:encoded><![CDATA[<p>Last week, I participated in the <a href="http://blog.artsusa.org/tag/october-2011-blog-salon/">National Arts Marketing Project Blog Salon</a> over at Americans for the Arts. My two entries focused on applying research and feedback-gathering principles to a marketing context. Not the typical Createquity fare, but if you find such things of interest, here&#8217;s some more information below.</p>
<p><strong><a href="http://blog.artsusa.org/2011/10/05/is-your-arts-programming-usable/">Is Your Arts Programming Usable?</a></strong> considers the concept of usability testing taken outside of its usual tech- or product-specific milieu. Here&#8217;s an excerpt:</p>
<blockquote><p>At Fractured Atlas, we’re in the process of rolling out a few new technology products that have been in the pipeline for the past year or so. One of these is <a href="http://www.artful.ly/" target="_blank">Artful.ly</a>, which is the hosted version of the <a href="http://athena.fracturedatlas.org/" target="_blank">ATHENA open-source ticketing and CRM platform</a> that was released earlier this year. Another is a calendar and rental engine add-on to our performing arts space databases in <a href="http://nycpaspaces.org/" target="_blank">New York City</a> and the <a href="http://www.bayareaspaces.org/" target="_blank">San Francisco Bay Area</a> that will allow visitors to the site to reserve and pay for space directly online.</p>
<p>For both of these resources, we felt it was important to get feedback from actual users before proceeding with a full launch. So we engaged in a round of what’s called <a href="http://en.wikipedia.org/wiki/Usability_testing" target="_blank">usability testing</a>. Usability testing differs from focus groups in that it involves the observation of participants <em>as they actually use the product</em>. So, rather than have people sit around a room and talk about (for example) how they might react to a new feature or what challenges they face in their daily work, you have people sitting in front of a computer and trying to navigate a website’s capabilities while staff members look over their shoulders and take notes.</p></blockquote>
<p><strong><a href="http://blog.artsusa.org/2011/10/07/whither-the-time-machine-considering-the-counterfactual-in-arts-marketing/">Whither the Time Machine? Considering the Counterfactual in Arts Marketing</a></strong> explains why deducing what <em>would have </em>happened if things had gone differently is the central problem of arts research, and offers a couple of examples of how arts marketing can take advantage of control groups.</p>
<blockquote><p>In a marketing-specific context, counterfactual scenarios come into play when considering alternative strategies aimed at driving sales or conversions. One technique that a number of organizations have used is called <a href="http://en.wikipedia.org/wiki/A/B_testing" target="_blank">A/B testing</a>, which is when two different versions of, say, a newsletter or a website get sent to random segments of your target audience.</p>
<p>Internet technology makes A/B testing relatively painless to execute: in the case of a newsletter, for example, all it requires is a random sorting algorithm in Excel to divide the list in two before sending the slightly different newsletter versions to the lists as you normally would. You could test which design results in more clickthroughs to a specific link or which subject line results in a higher open rate.</p>
<p>By creating an A group and a B group, you are finding a way to test the counterfactual without the use of a time machine to go back and try things a different way. Assuming the groups truly are random and the sample size isn’t tiny, it’s a really great way of getting reliable information on what you’re doing.</p>
<p>A/B testing is not the only way of pursuing this kind of inquiry, however. Sometimes it’s not that easy to simply divide your target audience into two.</p></blockquote>
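<p>For the curious, here is what that newsletter split might look like in code rather than in Excel. This is a minimal Python sketch under stated assumptions: the subscriber list and the click counts are made-up placeholders, and the chi-squared test at the end is one standard way (among several) to check whether two clickthrough rates genuinely differ.</p>
<pre><code># A sketch of an A/B newsletter split. The subscriber list and the
# click counts below are hypothetical placeholders, not real data.
import random
from scipy import stats

subscribers = ["user%d@example.com" % i for i in range(10000)]  # placeholder list

random.seed(42)               # fixed seed so the split is reproducible
random.shuffle(subscribers)   # random assignment is what makes the test fair
half = len(subscribers) // 2
group_a = subscribers[:half]  # receives newsletter version A
group_b = subscribers[half:]  # receives newsletter version B

# ...send the two versions, then tally clicks on the link being tested.
clicks_a, clicks_b = 412, 365  # illustrative results only

# 2x2 table: clicked vs. did not click, for each version.
table = [[clicks_a, len(group_a) - clicks_a],
         [clicks_b, len(group_b) - clicks_b]]
chi2, p, dof, expected = stats.chi2_contingency(table)
print("A: %.1f%%  B: %.1f%%  p = %.3f"
      % (100.0 * clicks_a / len(group_a), 100.0 * clicks_b / len(group_b), p))
</code></pre>
<p>The shuffle is the whole trick: because assignment to A or B is random, group B stands in as the counterfactual for group A – the time-machine substitute the post describes.</p>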
<p>Enjoy!</p>
]]></content:encoded>
			<wfw:commentRss>https://createquity.com/2011/10/namp-blog-salon-posts/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
