A painting made by Harold Cohen’s computer program, AARON. Photo by Conall O’Brien

If a computer composes a symphony, should the resulting musical piece be considered a work of art? And how does a computer-generated work affect our perception of human-made works? These are not theoretical questions. A recent article in Pacific Standard highlights Simon Fraser University’s Metacreation project, which aims to investigate computational creativity, in part through the development of “artificially creative musical systems.”

This past June, three members of the project, researchers Arne Eigenfeldt, Adam Burnett and Philippe Pasquier, presented an evaluation study of their software-composed musical works at the 2012 International Conference on Computational Creativity. At a public concert in which both human-composed and computer-assisted music were performed by a professional string quartet, a percussionist and a Disklavier (a mechanized piano that interprets computer input), audience members were unable to differentiate between music generated by a computer and music written by a human composer, regardless of their familiarity with classical music.

The Metacreation project is not the only example of advances in artificial intelligence (AI). David Cope’s Experiments in Musical Intelligence (EMI) is a software system that analyzes existing music, and then generates original compositions in the same style. What’s more, such advances aren’t limited to musical arrangements. In 2008, the Russian publishing house Astrel SPb released True Love, a 320-page novel written in 72 hours by a computer program. And the Tate Gallery, SFMOMA and the Brooklyn Museum are among the institutions that have exhibited paintings made by AARON, an autonomous art-making program created by Harold Cohen. Indeed, computers’ capabilities now rival cognitive functions once thought to be intrinsically human. Computers can form links, evaluate, and even make novel works; they can function in ways that we think of as creative. The obvious question is, if computers are performing creatively, should we consider the resulting works art?

The simplest answer, and in many ways the most appealing to the human ego, is no, these computers are not making art. Art requires intention. This is why projects like Rirkrit Tiravanija’s Untitled 1993 (Café Deutschland), in which the artist set up a functioning cafe in a private gallery in Cologne, or Lee Mingwei’s The Living Room, in which Mr. Lee transformed a gallery into a living room and selected volunteers to act as hosts, are art; their makers intended them as such. By contrast, EMI, AARON and other AI systems have no sentient intention to make art, or anything else. Therefore, the works they create are not art, although they could be considered as such if a human had made them. Instead, it’s the software itself that is the art, and its programmers the artists.

By this reasoning, even if the computer-generated works are, in fact, works of art, they are authored not by the computer but by human software designers. The computer is merely a tool for making art, analogous to a brush or musical instrument. A 2010 Pacific Standard article, “The Cyborg Composer,” quotes EMI’s creator David Cope:

‘All the computer is is just an extension of me,’ Cope says. ‘They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?’

Indeed, the works created by EMI, AARON and in the Metacreation project are products of the information that their programmers choose to input. “The Cyborg Composer” details Cope’s process:

This program would write music in an odd sort of way. Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as ‘good,’ others as ‘bad.’ Eventually, the exchange produces a score, either in sections or as one long piece.
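The propose, judge, reweight loop that the article describes can be sketched in miniature. The class below is only a toy illustration, not Cope’s actual EMI: the note set, the weighting scheme and the stand-in judgment rule are all assumptions made for the sake of the example.

```python
import random

class AssociationNetwork:
    """Toy model of a propose/judge/reweight loop (not Cope's EMI)."""

    def __init__(self, notes, seed=0):
        self.notes = notes
        self.rng = random.Random(seed)
        # Every note-to-note transition starts with a neutral weight.
        self.weights = {(a, b): 1.0 for a in notes for b in notes}

    def propose(self, start, length):
        """Generate a phrase, favoring highly weighted transitions."""
        phrase = [start]
        for _ in range(length - 1):
            candidates = [(self.weights[(phrase[-1], n)], n) for n in self.notes]
            total = sum(w for w, _ in candidates)
            r = self.rng.uniform(0, total)
            for w, n in candidates:
                r -= w
                if r <= 0:
                    phrase.append(n)
                    break
        return phrase

    def judge(self, phrase, good):
        """Reweight every transition in the phrase up ('yes') or down ('no')."""
        factor = 1.5 if good else 0.5
        for a, b in zip(phrase, phrase[1:]):
            self.weights[(a, b)] *= factor

net = AssociationNetwork(["C", "D", "E", "G"])
for _ in range(20):
    phrase = net.propose("C", 4)
    # A stand-in judgment: approve phrases that resolve back to the tonic.
    net.judge(phrase, good=(phrase[-1] == "C"))
```

After enough rounds of approval and rejection, some transitions dominate the proposals, which is the “association network” idea in its crudest form.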

Similarly, AARON’s paintings rely on the knowledge that Harold Cohen enters. AARON’s paintings all have similar subjects—mostly people standing with plants. In an interview for PBS’s Scientific American Frontiers, Cohen explains:

AARON can make paintings of anything it knows about, but it actually knows about very little—people, potted plants and trees, simple objects like boxes and tables, decoration. From time to time I wonder whether it wouldn’t be a good idea to tell it about more, different, things, but I can never persuade myself that it would be any better for knowing how to draw a telephone, for example. So I always end up trying to make it draw better, not more.

Certainly, computers will continue to evolve as tools that artists can use. But what if computers themselves become advanced enough to design the software that is used to create paintings, sculptures, symphonies or stories? Who is the artist then? We wouldn’t give credit to Tony Smith for his daughter Kiki Smith’s drawings and sculptures. By the same token, it wouldn’t make sense to credit a programmer for works made by software that his program designed; the computer would undeniably be the artist. However, as we create computers and software capable of making works that are aesthetically indistinguishable from artworks made entirely by the human hand, such works may become less appealing and desirable. The process will become just as important as the finished work, and we’ll come to esteem the imperfections of the human hand. This would be an extension of the craft and farm-to-table movements currently in vogue. Just as one might prefer a hand-stitched scarf to one that is digitally embroidered, or misshapen heirloom tomatoes to their perfectly round supermarket counterparts, so too might an art collector choose a human-made painting over one that was computer-generated. Furthermore, as computers become more capable of creating art objects, we’ll see a shift toward art that is less object-centric and more experience-centric: more projects like those of Rirkrit Tiravanija and Lee Mingwei, participatory, interactive and socially engaged.

Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” considers the potential effects of photography and film, then new media, on the arts. In the seminal 1936 essay, Benjamin discusses the decline of the autonomous aesthetic experience resulting from the loss of “aura,” the sense of detached authority that resides in original, one-of-a-kind works. It makes sense that the computer generation, rather than reproduction, of art might lead to a similar loss. After all, artwork that a computer can create becomes less special as it becomes easier to produce. If we have a tool that can generate a perfect symphony or painting, it becomes less interesting to make these things at all. Accordingly, as computational creativity advances, artists may become less concerned with creating beautiful music, paintings or objects, and more concerned with making something that is not so easily produced by a series of mathematical functions.

However, what if we enter Isaac Asimov and Philip K. Dick territory, a world where computers are not merely executing a set of algorithms but are actually thinking in the human sense? What if we eventually create computers that possess intentionality? I subscribe to philosopher John Searle’s view that this sort of artificial intelligence is impossible. Searle argues that computers can simulate, but not duplicate, human thinking, and illustrates his contention with a thought experiment, “The Chinese Room.” Searle asks us to imagine a computer into which a Chinese speaker can input Chinese characters. By following the instructions of a software program, the computer outputs Chinese characters that appropriately respond to what was entered, so that any Chinese speaker would be convinced that he or she was talking to another Chinese-speaking person. Searle then offers another scenario: suppose that he, an English speaker who does not speak Chinese, is in a room and is given a set of Chinese characters. He is also given a set of instructions, in English, that he follows to create responses in Chinese that will convince any Chinese speaker that he or she is conversing with a fluent Chinese speaker. In effect, Searle would be following a program, just as the computer in the first scenario did. In his Behavioral and Brain Sciences article “Minds, Brains, and Programs,” he explains that although he is able to generate intelligent responses in Chinese, he still does not understand Chinese. And because Searle is merely replicating the computer in analog form, if Searle cannot understand Chinese, the computer cannot either. Therefore, although the computer may appear to possess a human level of comprehension, its intelligence is only superficial. As Searle writes:

For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in the cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
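Searle’s setup can be caricatured in a few lines of code: a program that produces convincing replies by pure symbol matching, with no representation of meaning anywhere. This is only an illustrative toy, and the rulebook entries below are invented for the example, but it shows how rule-following can mimic understanding without containing any.

```python
# A caricature of the Chinese Room: replies come from pure symbol
# matching. Nothing in this program represents the *meaning* of the
# characters, which is exactly Searle's point. The rulebook entries
# here are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",     # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",     # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # The "man in the room" only matches shapes against the rulebook;
    # at no point is anything translated or understood.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))
```

To an outside interlocutor whose questions happen to be in the rulebook, the replies are indistinguishable from a speaker’s; inside, there is only lookup.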

But even if we do believe that computers with feelings are the future of science and not science fiction, we’ll have essentially created another intelligent life form, just one that is not carbon-based. At that point, these beings are less computer and more human. They may indeed have intentionality, and with that all of the emotional baggage and thought capacity that can be both a help and a hindrance when creating a masterpiece.


  • Hey Jacquelyn,

    I thought this was a great post. And I absolutely agree that a good way to make sense of art is to pose this question concerning AI. I thought you did a great job of running down some of the issues and implications.

    It turns out that when I caught the link to your post the other day I had been mulling over related issues, and seeing your post prompted me to organize some thoughts on the subject. I couldn’t see a way of contacting you outside the comments to this blog so I thought I’d give you a holler here and suggest that you might get a kick out of the ramblings I undertook to this question on my own blog. You can follow the link in the trackback above.

    I’d also be curious what you thought of the arguments against Searle in that paper you linked to. Some of them were serious objections, I thought. My own reading of the Chinese Room is from a Wittgensteinian bent, so I’m probably not as charitable about Searle’s conclusions as I am about his project. It does all seem to relate back to art. And our confusions about art are often influenced by our unwillingness to ask these kinds of questions.

    So “Bravo!” for raising the issue. Maybe one day folks will start to take it seriously.

    Thanks for your always great posts! Keep up the good work!

    • Jacquelyn Strycker

      Thanks so much for your thoughtful comments, Carter! Certainly, in her book Computer Models of the Mind, Margaret Boden pokes some holes in Searle’s argument:

      “Computational psychology does not credit the brain with seeing bean-sprouts or understanding English: intentional states such as these are properties of people, not of brains.”

      “In short, Searle’s description of the robot’s pseudo-brain (that is, of Searle-in-the-robot) as understanding English involves a category-mistake comparable to treating the brain as the bearer, as opposed to the causal basis, of intelligence.”

      In other words, our brains aren’t simply containers of thoughts and answers; they generate these things. Is the Chinese Room program generating thoughts and answers in Chinese? Wouldn’t it then follow that it understands Chinese?

      Ned Block, and later Ray Kurzweil, both present powerful arguments that the man in the Chinese Room is an implementer, which is not the same as being the system itself. Just because the man doesn’t understand Chinese doesn’t mean the Room/Machine/Program doesn’t understand Chinese. The man is actually a tool for the system, rather than the system being a tool for the man. That’s a trip, and fuel for some art!

      All that said, I don’t think that the human mind is the same as a computer program, though the AI being developed now treats it as such. Perhaps a more accurate analogy for the human brain would be many programs working simultaneously, and at times contradicting one another. But I suppose that, eventually, a complex AI like this is not out of the realm of possibility. In that case, I’d think of it as a new sort of life form, and I like the conclusion that you came to in the post on your own blog:

      “If human art is essentially a breaking of human rules and going off the beaten human path, just what would that endeavor look like to an artificial intelligence? Clearly we would not be expecting the next Vincent van Gogh. He was all too human in his failings and his transcendence. How might a machine fail and transcend? Not facing a human life it would have to be different. Right? Would we even recognize its performances as art? It’s an interesting question that has no answer besides the weak human one. It was after all a human question….”

  • Kay

    Just because someone wants to think that their random ideas are profound doesn’t make them profound. Appreciation of art is much more a matter of how the person seeing it responds to it than what the creator intends.

    If someone sees a painting and considers it the most beautiful they’ve ever seen, does it matter that that person wasn’t told it was made by a machine, and that it’s thus somehow automatically inferior to the work of some mediocre mind?


  • Jorge Luis Lopez Chahoud

    During the 19th century, did anyone think that we could land a man on the moon? I think AI creativity is easier to accomplish.