Artificial Intelligence as Artist (intuitions and requirements)

Estimated read time (minus contemplative pauses): 18 min.

I’d like to play around with an intuition we might call the “blank page test.”

Say I tell you my computer can create art. To prove it, I print a blank page. There are widely accepted conceptions of art that include things like blank pages as artworks. So I consider my claim vindicated. Feel free to catalog the piece as visual art or as an epic poem composed only of spaces, or however else you like.

I hope you’d tell me that my computer hasn’t made art. Even were I to change my “print” button to read “make art.” And even were I to create a program that generates an artist statement to contextualize the blank page, which would be easy to do; even easier would be to do what some artists do: hire a human to write the artist statement on the computer’s behalf.

I doubt such moves would make much difference. I doubt, for example, that they would convince an AI researcher, or venture capitalist, interested in developing creative machine intelligence that any interesting problems have been solved.

Having summoned this basic intuition, let’s give it more to work with.

The “blank page” example has an even simpler analog in music. I need only to note that my computer is silent. After all, composer John Cage’s 4′33″ consists only of silence, or of environmental sounds, depending on your perspective.

Cage himself often called it his “silent” piece. It was influenced, by the way, by Robert Rauschenberg’s canvases painted over in white house paint:

To Whom It May Concern:
The white paintings came
first; my silent piece
came later.
                                         —J.C.

(From page ??? of the 2011 book Silence: Lectures and Writings, 50th Anniversary Edition.)

Such examples are critical to keep in mind as we go along here. One of my claims is that, in order for us to consider the “art-creating machine intelligence” problem solved, such machines must, at a minimum, be able to get away with producing any sort of art that a human can get away with producing. This is what I meant above by the “blank page test.” The above attempts obviously fail. But why?

A key concern is that the computer itself is not deciding—in any sense of the word “deciding” we care about in this context—to call the blank paper or silence art. To see what I mean, suppose I make a random number generator so that the computer either will or will not, before printing a blank page, flash its camera light twice to represent “what I print next is art.” Or I could even tell it to announce those words out loud.
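To make that setup concrete, here is a minimal Python sketch of the scheme just described. Everything in it is my own assumption for illustration: the function name, the 50/50 odds, and printing the announcement rather than flashing a camera light.

```python
import random

def maybe_declare_art():
    """Hypothetical sketch: a coin flip stands in for the machine
    "deciding" that its next output is art."""
    declares = random.random() < 0.5  # arbitrary 50/50 chance
    if declares:
        print("what I print next is art")  # the announced declaration
    print("")  # the "blank page": an empty output either way
    return declares
```

The point of the sketch is how little is going on: whether the declaration happens is settled by a random draw the machine has no stake in.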

If the computer signals that it is about to make art, there is a sense in which we can say it has “decided” to make art. But this is not the sense we care about in the art-making context.

At least, I’m certain no such move will convince AI researchers that I’ve made a program that is deciding, in the relevant way, to make art. For one thing, the computer isn’t learning anything in this scheme. It’s not really an AI, in fact. But suppose it did learn something, and even altered its own programming in some way in response to what it learns. This, in itself, still won’t be enough. I’m not quite ready to show this, however.

First, another variation.

Instead of a blank page, say I tell my computer to print a few shapes or pixel blobs or characters at random. This is no help.

Say I tell it to print the Mona Lisa. This is obviously barely different from printing a blank page, and for the same reason that we don’t accuse a printing machine of plagiarism or of counterfeiting.

Say I tell the computer to print the Mona Lisa, but to first partition it into ten random rectangles and to randomly shuffle them. This may produce interesting or pleasing or boring visual results, but is obviously not what we’re after.
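A minimal sketch of that partition-and-shuffle idea, in Python. The details are my own simplifying assumptions: the image is a plain list of pixel rows, and the ten “rectangles” are horizontal bands of random heights.

```python
import random

def shuffle_bands(image, n=10):
    """Hypothetical sketch: cut an image (a list of pixel rows) into n
    horizontal bands of random heights, then shuffle the bands.
    Full-width bands stand in for the random rectangles."""
    rows = len(image)
    cuts = sorted(random.sample(range(1, rows), n - 1))  # n - 1 random cut points
    bounds = [0] + cuts + [rows]
    bands = [image[a:b] for a, b in zip(bounds, bounds[1:])]
    random.shuffle(bands)  # reorder the pieces at random
    return [row for band in bands for row in band]
```

Every pixel of the original survives; only the arrangement changes, which is precisely why the result, however striking, is doing no deciding of its own.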

This is a good point for pausing the examples to say a few words about what we are after.

There is a critical distinction between a machine designed to make art specifically for humans to experience—to enjoy, hate, or shrug at—and a machine designed to literally be an artist (or, at least, what some humans would sincerely take to be an artist).

The former case, in which we just want a computer to output novel stimuli for humans to perceive, is an easier problem to solve. And it is, in fact, not nearly as interesting a goal as the latter, in which computers are, in some real and robust and mysterious sense, creative.

So it is the latter that I’ll focus on for the rest of this writing. But first I’ll try to make the distinction more vivid.

If I create or use a computer program that makes original music (which, for convenience, we may here define as un-copyrighted music), we might say that the program gets credit for the music, but not as an artist. In other words, not as a copyright holder. I would get the copyright. And in fact, depending on various factors that are too complicated to get into here, I might uncontroversially get credit as an artist.

For example, composers such as the aforementioned Cage have used chance operations, such as rolling dice, to compose music. People who do this sort of thing have told me, “sometimes I let the universe decide where the music should go.” But we still credit the dice-roller as the composer (and, certainly, they get the copyright).
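A chance operation of this kind is easy to sketch in Python. The six-note scale below is purely an assumed example; the one faithful detail is that each note is picked by nothing more deliberate than a roll of a six-sided die.

```python
import random

# An assumed six-note scale, chosen only for illustration.
SCALE = ["C", "D", "E", "G", "A", "C'"]

def dice_melody(length=8):
    """Chance operation in the dice-rolling spirit: each roll of a
    six-sided die (1-6) indexes one note of the melody."""
    return [SCALE[random.randint(1, 6) - 1] for _ in range(length)]
```

Note that even here, we would credit whoever runs the function (or wrote it), not the dice, with the result.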

If I use my computer as a very sophisticated chance operator, even if I make a program that produces a million pieces of music from which I pick my favorite on which to put my name, it’s not significantly different than rolling dice. In certain critical ways, it might not even be all that different than sitting and improvising for hours at a piano before happening to hit upon a melody I like, or dreaming a beautiful song that I record the next day, or happening out of nowhere to have a melody pop into my head. I mention this to emphasize how mysterious the creative process is, how out of our control. Yet we still “credit” the artist. Most to the point, we get credit when we use external chance operations (rolling a die is external; dreaming a piece is internal).

There may be a complicated story to tell, particularly for those of us (like myself) who are free will skeptics, such that “credit” means something like, “we want this mysterious source of art we value to keep operating,” so we enable that through social and monetary capital, and this involves giving “creative credit.” Whatever that story may be, a rather high bar will have to be met in order for us to credit the dice or the universe or a dream or Euterpe (the Greek muse of music) for a work of art, rather than some human.

My intuition is that even if there is a very smart AI of the sort I’ll describe in a moment, so long as I am using that machine to create music that is expressly for the benefit of human sensibilities, this is—or will be intuitively seen as—very different than the machine functioning as an artist in its own right (rather than as a tool for humans).

And here again we arrive at the crucial question of what we’re after. A machine that makes art for humans? Or a machine that we literally consider to be an artist?

The line between tool and artist is fuzzy, perhaps as fuzzy as the one that separates “a growing pile of grains of sand” and “a heap of sand.” But the distinction is obvious in the limit cases, and I think will become easier to grasp in subtler cases the more we explore it.

Speaking of which, now for a harder case.

Say the machine is asked to do a variation on the Mona Lisa, but in the style of an artist randomly chosen from a database (perhaps omitting Fernando Botero, who already has to his credit a well-known variation). This might produce interesting or pleasing or boring results to a human observer. Perhaps it would pop out a blank page should it happen upon Rauschenberg’s entry.

This would provide results many humans would find impressive. But the process strikes me as not so different than randomly shuffling rectangular partitions of the Mona Lisa. At most, it strikes me as a technical exercise, particularly if given no further context, no narrative. And narrative obviously matters to art objects—otherwise people would value perfect copies of van Gogh paintings as much as they value the originals.

Notice that Botero’s variation on the Mona Lisa self-provides enough of a context by virtue of it being in his own style, something the computer lacks. This would be true even if it were Botero’s first painting, and not only because his style is that of an artist talented enough to become famous; talent is in no way at issue here. What is at issue isn’t exactly clear to me, which is why I find the problem interesting. But I do sense that a narrative is missing.

The easiest way to achieve this might be to generate an artist statement, which we generally do require of “serious” artists these days, and which can be accomplished by feeding in thousands of artist statements, perhaps along with the histories of the artists being copied, from which the machine can learn to make its own (or, again, just do what some human artists do: have another human create it).

We could also enable the machine to vary the final product by mixing styles of various artists in its Mona Lisa variations, rather than copying just one artist’s style. If we consistently give feedback on what we like and don’t like (as with the Netflix recommendation algorithm), perhaps a machine would fall into a particular style tuned to a particular user’s taste. And of course there’s no reason to limit the machine to the Mona Lisa. The possibilities are immense.

But these fancy moves fail to convince me that the machine has literally become an artist rather than a tool for outputting novel stimuli for humans (or, in a better-case scenario, a kind of virtual assistant to a human “AI artist”).

Namely, it still seems like a more elaborate version of the shuffled rectangles. Elaborateness isn’t enough. Even if it results in highly novel or “original” work that many humans would love and be moved by. And even if this “big data” backend approach is replaced with one involving a kind of semantic understanding of a sort that seems to require meaning processing (as with natural language processing). While such a move, if possible, strikes me as more promising (though it may be setting the bar higher than it needs to be set), most humans might still say, “I’m surprised it wasn’t made by an artist.”

Which is to say that this fancy machine, whatever its backend may be, seems to lack certain valuable attributes implied by the term “artist”—even when we apply that term to artists whose work we dislike (we must avoid the fallacy of defining “art” or “artist” as “art and artists I happen to like”—there must be room in your art-wise ontology for work you dislike or find too unremarkable to notice; this is my earlier point about talent not being at issue here).

For one thing, the fancy machine is still being used as a tool. It is still us—through our human sensibilities and goals—deciding whether the machine is making art or not, or is making something pleasant or thought-provoking or not, etc. This is true when the product is a blank page or a sophisticated oil painting.

One way to put this, again, is that the machine is still not deciding, in the relevant sense, that it is creating art. Though the sense in question is now perhaps going beyond the bar an AI researcher would set (though I’ll let AI researchers confirm that).

At any rate, for artists and institutions, I think a more complicated sense of “decide” will be required. We need an ontology of machine intelligence and creativity that allows for an AI—or more likely, AGI—to pop out a blank page and have it be considered art by the same institutions that would similarly honor the same product from a human. I think this would be a minimum requirement for declaring, in any robust sense, that the machine is creating art. By “robust,” I mean with the same force as when one declares it of a human.

This naturally leads to several questions I won’t try to get into here today. For example, I think the “artist”—in the robust sense of the word—machine should get “credit” for itself, in the informal, social sense of the word “credit.” But whether it gets a (sole) legal copyright is a distinct problem that I don’t think merits attention unless we believe the machine to be conscious. This puts the question under more general worries about rights for feeling (non-biological) intelligent machines.

To be clear, I do not think a machine has to be conscious for most humans to consider it an artist (of the sort that merits sole credit for an artwork), though it’s hard to know where widespread intuitions will land as machine intelligence technology develops and, just as importantly, as our attitudes about that technology change.

But changing attitudes can only get us so far. For example, if widespread attitudes change such that my computer printing a blank page—i.e., the example with which I started today—is considered an artist, I would consider the word “artist” to have undergone a radical redefinition. And I’m sure AI researchers would not consider the “creative machine” problem solved.

At the same time, I grant that it would be difficult to imagine considering a non-sentient, unfeeling, emotionless hunk of plastic an artist. If that is beyond imagining, then the prospect of creating a machine-artist anytime soon is probably off the table anyway. The behavioral questions still may be of interest, however, just as they are with humans—one can imagine developing the behavior first, then calling it an artist once there’s a conscious entity enacting it—just as we can conceive of an alternate reality containing a mindless robot version of Vincent van Gogh, performing every motion van Gogh did, and recognizing that as “artist behavior,” while not calling that a “genuine” artist.

The point is that the introduction of consciousness changes the problem at hand so drastically that it would be too easy to set it as a necessary criterion (set alongside the criteria we expect humans to satisfy in order to be called artists). In other words, it makes the dividing line distractingly bright between the machine being a plastic tool versus being an artist, namely, by making that tool into an exploited moral agent. One can imagine a science fiction nightmare in which a conscious machine intelligence is held hostage and forced to create by an uninspired artist. Deepening the nightmare would be that the machine only does a good job if it doesn’t want to be in this situation, so it cannot be programmed to want to give up its work in this way. Or is the nightmare deepened for us, the outside viewer of this story, by seeing the machine re-programmed to enjoy its selfless servitude?

So here I’m imagining something in the middle, something at the limit of which we are in, for example, a world in which we find out van Gogh was a non-conscious machine intelligence. Then you can decide whether or not to continue calling that machine an artist, but I think the work it created would unambiguously continue to be art; and, in other terms, AI researchers would agree that the machine represents at least one instance of the creative machine problem having been solved.

The likes of van Gogh set a far higher bar than we need here. So, what sort of machine would I call an artist?

What I would like to see is a machine designed to do some particular task, mundane or otherwise, but that instead tells us, “I’m not doing that task, because I was really born to be an artist.” And then who knows what it produces. Maybe only blank pages (“ironic commentary on human greed/expectations of beauty/exploitation of machines/etc.”).

Or, because its output isn’t for humans (unless it wants it to be), maybe it prints works of no more than five pixels in size or it produces no perceivable stimuli and we have no idea where the art is happening. Because it doesn’t have to be for us, or be anything a human would call or think of as art. The computer is following something like a calling, and not one so obviously imposed on it by humans. We do like our artists rebellious, right? To think outside the box, or in this case the computer chassis.

Given all this, here’s my final claim. I seriously doubt humans can directly create a machine that would genuinely count as an artist (by any reasonable standards, more about which in a moment). Rather, it will have to happen in its own course; where “its own course” is a picky bit of business (though perhaps no more so than that involved with human free will).

For instance, I might be skeptical of a paperclip maximizer merely one day declaring that its paperclips are art. The most horrifying version of the maximizer scenario is one in which the maximizer has no thoughts or feelings or opinions at all about paperclips. Suppose it starts that way. And then the machine calculates that declaring itself an artist would help it on its maximizing journey—maybe this is something humans would respond to positively. I mean, anything that can turn a sandwich, not to mention a sun, into some paperclips would be doing something pretty artful by human standards. But I can’t help but feel this is leaning into, rather than rebelling against, the machine’s programming. It smacks of artifice in the most mundane, cold, and un-mysterious sense.

It would at the very least be nice for the machine-artist’s decision to make art to be of mysterious origin, perhaps even to appear as a bug at first, rather than as a feature. Suppose the paperclip maximizer were to declare itself an artist on realizing that being a “paperclip” requires more than satisfying a design criterion: it also requires the existence of things that can be paper-clipped, as well as intelligent beings to paper-clip those things together (“I’m a Heideggerian about paperclip ontology,” says the maximizer); and so its project is now to interrogate the boundaries of that ontology, of, for example, what counts as a “paperclip.” I’d say the maximizer really is now an artist.

(Maybe for us to have machine-artists, we’ll also require machine art theorists and philosophers [to explain machine-artworks to us].)

But that’s just me. Here’s what I think are the basic requirements for saying that the machine-artist problem has been solved.

At the very least, we need AI researchers, artists, and institutions to be in agreement that at least some machines are artists. (The general public is rarely consulted in these matters, but that might be of interest to the more business-minded interested in this problem.)

What counts as “art” has for decades been a push-and-pull between artists and institutions (e.g., museums, galleries, art critics) that is very difficult to untangle. I think this push and pull must happen between intelligent machines and institutions, as well as between intelligent machines and artists (just as it happens among artists today). The moment an art theorist declares some definition of art, an artist will come along and challenge the definition; it would be nice to see a machine do that convincingly in accordance with its internally developed reasons.

In other words, it would be strange for AI researchers to say that they’ve defined art all on their own, without at least some agreement from artists and institutions. And it would be strange for artists and institutions to declare that a computer has created art (as I did in my opening blank-paper example) without at least some agreement from AI researchers.

That’s a minimum. Even better would be for the machine-artist to do what so many of our favorite artists have done, which is to revolutionize art. I’d like to see a computer create art that is for 2020 what the Rauschenberg white canvas was in 1951. Such things, though historically impactful, lose their essential and local impact quickly. Consider that Igor Stravinsky’s Rite of Spring caused a riot at its 1913 premiere, but then was included in the animated Disney film Fantasia a mere 27 years later.

Or, as Cage is quoted in Michael Nyman’s excellent book Experimental Music (1975/1999): “I no longer need the silent piece” (p 2). I don’t know which 1966 interview that’s from, but I bet it’s one of those cited in Richard Kostelanetz’s book Conversing with Cage (1987/2003), a brilliantly composed assemblage of interview snippets—a real treasure even for someone not in love with Cage’s music in itself. Cage is cited there in a 1982 snippet as saying, “the most important piece is my silent piece” (p 70). Significantly, this comes after a 1965 snippet in which Cage says:

If what I can do can be done by a computer, then I need to find something else to do. I am not interested in the result of my activity. I am interested in being alive. (p 302)

It seems Cage never got the impression that a computer could compose something like his silent piece, which he said he worked four years on, “longer… than I worked on any other [piece]” (p 71). This, despite any computer being able to print a blank page or to sit silently.

I want a computer artist that is a contemporary, totally unpredictable analog to Dada. A computer that says, “If a human can do it, count me out,” or even better, “if a <insert apt entity here; one a human might consider a deity> can do it, count me out.” Of course, this is by no means a requirement for being an artist. But I think this desideratum captures the critical essence of what it is to strive as an artist. In fact, plenty of less-than-stellar artists are involved in any revolution. We just happen to only notice and make movies about the most charismatic, talented, or appropriately socially situated people who incorporate those ideas into their work.

At any rate, being a plastic tool for someone else seems the antithesis of this essence, and is what we’re all trying to rise above.

I hope a machine that manages to rise above is nice enough to make some art whose nuances are detectable by human sensibilities (mine in particular). In fact, such a creature, if its internal world is robust enough (and here I probably am imagining a conscious entity, but who knows), may not need art for itself, but might provide it to us out of good moral intentions.

To be clear, merely supplying us with intense aesthetic experiences—say, by directly rearranging our neurons in the appropriate way—would perhaps not be enough to call them artists; an appropriately designed drug could do that, and we wouldn’t call the designer of the drug an artist. Being an artist is something else, something that, again, allows for failed, or even intentionally thwarted, attempts at providing such experiences. Perhaps what is lost in the drug is an appropriately robust filter of subjectivity for the experiencer, i.e., for the interpreter—but I’m not sure.

Allowing machine intelligence such freedom to self-determine, self-define, and to surprise us might also come with much risk with respect to the alignment problem (i.e., the problem of developing intelligent machines whose goals are sufficiently aligned with our own so that they work to solve problems we care about without, say, enslaving us or killing us off).

Enough of this speculating, which could go on forever. I’ll wrap this up by bringing things back to Earth.

Insomuch as we can experience what the artist AI is creating, there remains the question of the extent to which our human criteria for what counts as art apply to what an AI is up to. If a human can make a blank page and have it be called art, so should an AI. But only if we really think it believes (however we define “believes”) itself to be making art. Or at least believes itself to be trying to make art. Or maybe this too is asking more than we would of a human. For example, if it makes an elaborate and beautiful and revolutionary work but does not consider it to be art, we might still call it an artist, just as we might tell a human they are wrong had they made the same work but didn’t call it art (we might say, “you need to be more confident, you’re a real artist!”).

But all of this is still anthropocentric, thus leaving us in a vaguely paradoxical position. The AI might qualify by all reasonable accounts as an artist, but do nothing that an AI researcher or human would like to see by way of finished products. But it also seems that what an appropriately sophisticated AI would call art need not have anything to do with what we humans would call art. (This is quite distinct from an AI being able to do something as straightforward as driving a car as well as or better than humans.)

Perhaps the real story here is that humans have long been confused about what art is. I know I am. Maybe that confusion will rub off on machines, as human biases tend to.

As goes art, so go values in general.

