If a computer composes a symphony, should the resulting musical piece be considered a work of art? And how does a computer-generated work affect our perception of human-made works? These are not theoretical questions. A recent article in Pacific Standard highlights Simon Fraser University’s Metacreation project, which aims to investigate computational creativity, in part through the development of “artificially creative musical systems.”
This past June, three members of the project, researchers Arne Eigenfeldt, Adam Burnett and Philippe Pasquier, presented an evaluation study of musical works composed by software programs at the 2012 International Conference on Computational Creativity. At a public concert in which both human-composed and computer-assisted music were performed by a professional string quartet, a percussionist and a Disklavier (a mechanized piano that interprets computer input), audience members were unable to differentiate between music generated by a computer and music written by a human composer, regardless of their familiarity with classical music.
The Metacreation project is not the only example of advances in artificial intelligence (AI). David Cope’s Experiments in Musical Intelligence (EMI) is a software system that analyzes existing music, and then generates original compositions in the same style. What’s more, such advances aren’t limited to musical arrangements. In 2008, the Russian publishing house Astrel SPb released True Love, a 320-page novel written in 72 hours by a computer program. And the Tate Gallery, SFMOMA and the Brooklyn Museum are among the institutions that have exhibited paintings made by AARON, an autonomous art-making program created by Harold Cohen. Indeed, computers’ capabilities now rival cognitive functions once thought to be intrinsically human. Computers can form links, evaluate, and even make novel works; they can function in ways that we think of as creative. The obvious question is, if computers are performing creatively, should we consider the resulting works art?
The simplest answer, and in many ways the most appealing to the human ego, is no: these computers are not making art. Art requires intention. This is why projects like Rirkrit Tiravanija’s Untitled 1993 (Café Deutschland), in which the artist set up a functioning cafe in a private gallery in Cologne, or Lee Mingwei’s The Living Room, in which Mr. Lee transformed a gallery into a living room and selected volunteers to act as hosts, are art; their makers intended them as such. By contrast, EMI, AARON and other AI systems have no sentient intention to make art, or anything else. Therefore, the works they create are not art, although they could be considered as such if a human had made them. Instead, it’s the software itself that is the art, and its programmers the artists.
By this reasoning, even if the computer-generated works are, in fact, works of art, they are authored not by the computer but by human software designers. The computer is merely a tool for making art, analogous to a brush or a musical instrument. A 2010 Pacific Standard article, “The Cyborg Composer,” quotes EMI’s creator, David Cope:
“All the computer is is just an extension of me,” Cope says. “They’re nothing but wonderfully organized shovels. I wouldn’t give credit to the shovel for digging the hole. Would you?”
Indeed, the works created by EMI, AARON and in the Metacreation project are products of the information that their programmers choose to input. “The Cyborg Composer” details Cope’s process:
This program would write music in an odd sort of way. Instead of spitting out a full score, it converses with Cope through the keyboard and mouse. He asks it a musical question, feeding in some compositions or a musical phrase. The program responds with its own musical statement. He says “yes” or “no,” and he’ll send it more information and then look at the output. The program builds what’s called an association network — certain musical statements and relationships between notes are weighted as “good,” others as “bad.” Eventually, the exchange produces a score, either in sections or as one long piece.
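The feedback loop described above can be sketched in a few lines of code. This is a toy reconstruction of the general idea, not EMI’s actual implementation; the class and method names (AssociationNetwork, feedback, respond) and the chord labels are invented for illustration.

```python
class AssociationNetwork:
    """Toy model: weights transitions between musical statements
    according to yes/no feedback from a human composer."""

    def __init__(self):
        # weights[(a, b)] = how strongly statement b may follow statement a
        self.weights = {}

    def feedback(self, prev, nxt, good):
        """Mark a transition as 'good' (+1) or 'bad' (-1)."""
        key = (prev, nxt)
        self.weights[key] = self.weights.get(key, 0) + (1 if good else -1)

    def respond(self, prev, candidates):
        """Answer a 'musical question' by picking the candidate
        statement with the highest learned weight."""
        return max(candidates, key=lambda c: self.weights.get((prev, c), 0))


net = AssociationNetwork()
net.feedback("C-E-G", "F-A-C", good=True)   # composer says "yes"
net.feedback("C-E-G", "B-D-F", good=False)  # composer says "no"
print(net.respond("C-E-G", ["F-A-C", "B-D-F"]))  # prints "F-A-C"
```

Even this crude version shows why the authorship question is slippery: every weight in the network traces back to a human judgment.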
Similarly, AARON’s paintings rely on the knowledge that Harold Cohen enters. AARON’s paintings all have similar subjects—mostly people standing with plants. In an interview for PBS’s Scientific American Frontiers, Cohen explains:
AARON can make paintings of anything it knows about, but it actually knows about very little—people, potted plants and trees, simple objects like boxes and tables, decoration. From time to time I wonder whether it wouldn’t be a good idea to tell it about more, different, things, but I can never persuade myself that it would be any better for knowing how to draw a telephone, for example. So I always end up trying to make it draw better, not more.
Certainly, computers will continue to evolve as tools that artists can use. But what if computers themselves become advanced enough to design the software that is used to create paintings, sculptures, symphonies or stories? Who is the artist then? We wouldn’t give credit to Tony Smith for his daughter Kiki Smith’s drawings and sculptures. By the same token, it wouldn’t make sense to credit a programmer for works created by software that his program designed. The computer would undeniably be the artist.

However, as we create computers and software capable of making works that can’t be aesthetically distinguished from artworks made entirely by the human hand, such works may become less appealing and desirable. The process will become as important as the finished work, and we’ll come to esteem the imperfections of the human hand. This is, in fact, an extension of the craft and farm-to-table movements currently in vogue. Just as one might prefer a hand-stitched scarf to one that is digitally embroidered, or misshapen heirloom tomatoes to their perfectly round supermarket counterparts, so too might an art collector choose a human-made painting over one that was computer generated. Furthermore, as computers become more capable of creating art objects, we’ll see a shift toward art that is less object-centric and more experience-centric: more projects like those of Rirkrit Tiravanija and Lee Mingwei, participatory, interactive and socially engaged.
Walter Benjamin’s “The Work of Art in the Age of Mechanical Reproduction” considers the potential effects of photography and film, then new media, on the arts. In the seminal 1936 essay, Benjamin discusses the decline of the autonomous aesthetic experience resulting from the loss of “aura,” the sense of detached authority that resides in original, one-of-a-kind works. It makes sense that the computer generation, rather than reproduction, of art might lead to a similar loss. After all, artwork that a computer can create becomes less special as it becomes more common. If we have a tool that can generate a perfect symphony or painting, it becomes less interesting to make these things at all. Accordingly, as computational creativity advances, artists may become less concerned with creating beautiful music, paintings or objects and more concerned with making something that is not so easily produced by a series of mathematical functions.
However, what if we enter Isaac Asimov and Philip K. Dick territory, a world where computers are not merely executing a set of algorithms but are actually thinking in the human sense? What if we eventually create computers that possess intentionality? I subscribe to philosopher John Searle’s theory that this sort of artificial intelligence is impossible. Searle argues that computers can simulate, but not duplicate, human thinking, and illustrates his contention with a thought experiment, “The Chinese Room.” Searle asks us to imagine a computer into which a Chinese speaker can input Chinese characters. By following the instructions of a software program, the computer outputs Chinese characters that appropriately respond to what was entered, so that any Chinese speaker would be convinced that he or she was talking to another Chinese-speaking person. Searle then offers another scenario: suppose that he (an English speaker who does not speak Chinese) is in a room and is given a set of Chinese characters, along with a set of instructions, in English, that he follows to create responses in Chinese that will convince any Chinese speaker that he or she is conversing with a fellow Chinese speaker. In effect, Searle would be following a program, just as the computer in the first scenario did, to create his responses. In his Behavioral and Brain Sciences article “Minds, Brains, and Programs,” he explains that although he is able to generate intelligent responses in Chinese, he still does not understand Chinese. And because Searle is merely replicating the computer in analog form, if Searle cannot understand Chinese, the computer cannot either. Therefore, although the computer may appear to possess a human level of comprehension, its actual intelligence is far more superficial:
For the same reasons, Schank’s computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in the cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
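Searle’s point can be made concrete with a deliberately dumb program: one that produces fluent-looking replies by pure symbol lookup, with no grasp of their content. This sketch is my own illustration of the thought experiment, and the phrasebook entries are invented.

```python
# A rule book mapping input symbols to output symbols, as in the Chinese Room.
# The program manipulates the characters without understanding them.
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫房间。",    # "What's your name?" -> "My name is Room."
}

def chinese_room(symbols):
    """Return the rule-matched reply; fall back to 'Please say that again.'"""
    return PHRASEBOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好吗？"))  # prints 我很好，谢谢。
```

To an outside observer the replies may look competent, yet nothing in the program understands Chinese, which is precisely the gap Searle argues no amount of rule-following can close.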
But even if we do believe that computers with feelings are the future of science and not science fiction, we’ll have essentially created another intelligent life form, just one that is not carbon-based. At that point, such beings would be less computer and more human. They may indeed have intentionality, and with it all of the emotional baggage and thought capacity that can be both a help and a hindrance when creating a masterpiece.