Can Artificial Intelligence Create Art?

By Editorial Staff

With so much global turmoil in the news, it’s interesting to note that young people everywhere are asking, “Why can’t we all just get along?” Well, as civilizations go, we do get along for the most part within our own societies. As we discussed in a previous blog, we’re wired to think like our peers.

In fact, if everybody “got along” to the point that there were no alternative opinions, we might question our own humanity. We call such behavior “robotic.” Think of those agreeable ladies in The Stepford Wives.

Amazingly, what we look for today in the field of robotics is that spark of individualized thinking that could signal true life. So if robots could express emotions, would they all be the same, or would they argue amongst themselves? This is the question of the day: Is it possible to create a lovely, artistic, expressive brain out of a pile of algorithms?

Generally when sci-fi writers broach this topic, the robotic brain is the less argumentative type. Most of the robots reach one inescapable conclusion and act in accord. A prime example is the movie I, Robot, based on the collection of stories by Isaac Asimov. The villain of the story is the dominant machine, VIKI, who sees herself as the mother protector of humankind. She is guided in her logic by Asimov’s Three Laws of Robotics, which state:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. (Spoiler alert – VIKI apparently finds it logical to terminate some lives in order to protect the whole of humanity.)
  2. A robot must obey orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Dr. Joseph Carr, a scientist and researcher in electrical engineering, has been following the progress of artificial intelligence (AI). He explains not only the situation Asimov predicted, but how it applies to us today. “In the original stories the robots come to rule humanity from the shadows as benevolent masterminds. It’s a bloodless coup that comes as robots are given, by humanity, more and more control over decisions like hiring and budgeting. It turns out well there because the first law ensures that they use their power to benefit people, like shunting a person into a job they’ll love…but think about how we are handing more and more of our lives over to algorithms like Facebook’s newsfeed. Those robots don’t have any laws ensuring that they work for our benefit, only the law of maximizing profits to their shareholders.”

Asimov’s laws are also central to the story-cum-blockbuster Bicentennial Man (and it’s noteworthy for this discussion that the first sign of humanity in the robot Andrew is that he creates a work of art). That movie was on the cusp of the 21st century. Today we’re in the latter half of the century’s second decade, when mid-century futurists predicted we’d be mining Mercury and living on Mars. We’re impatient to know when we’re going to see “real” robots in action.

Sadly, the positronic mind Asimov imagined still hasn’t come to light, although his concerns about internal memory capacity were dealt with handily by the advent of the neuromorphic chip. The good news, however, is that many of you may already own a robot with at least a rudimentary claim to sentience: the Roomba perceives and reacts to its surroundings on a regular basis.

Maybe you’re thinking sentience isn’t the best criterion to define a real 21st-century robot. Perhaps the question is whether it can be discerning. Can it decide between one thing and another? Can a robot write a book? Dr. Carr points out that a robot can certainly make a good sports journalist. “Since sports results are compiled from a list of stock phrases and statistics that are automatically filled in by a search engine, this task falls perfectly in line with the well-programmed robot’s skillset. If the point of the job is to simply tell us who won a game and provide a few vital statistics about it, the task is merely data entry…and robots excel at data entry.”
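To see how little “writing” is involved, here is a minimal sketch of the stock-phrase approach Dr. Carr describes. The template, team names, and statistics are all hypothetical, invented for illustration; a real system would pull the numbers from a sports database rather than a hand-typed dictionary.

```python
# A toy template-driven sports recap: the "writing" is just slotting
# statistics into stock phrases -- data entry, not authorship.
from string import Template

STOCK_PHRASES = Template(
    "The $winner beat the $loser $w_score-$l_score on $day. "
    "$star led the $winner with $points points."
)

def write_recap(stats: dict) -> str:
    # Pure substitution: the program exercises no judgment at all.
    return STOCK_PHRASES.substitute(stats)

print(write_recap({
    "winner": "Hawks", "loser": "Owls",
    "w_score": 98, "l_score": 91,
    "day": "Saturday", "star": "J. Smith", "points": 31,
}))
# → The Hawks beat the Owls 98-91 on Saturday. J. Smith led the Hawks with 31 points.
```

Every recap this produces is grammatical and informative, yet nothing in it originates with the machine, which is exactly Carr’s point.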

No, you say, the robot must craft phrases that are lovely and unique. Carr tells us, “Certainly there are processes for AI programs to output poetry, but these efforts have been less successful since the only processes developed at this point instruct the machine to take snippets of poems written by people and recombine them.” NPR provides examples of these alongside human-crafted poems and invites your judgment of their success.
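The recombination process Carr mentions can be sketched just as simply. The snippets below are invented stand-ins for lines mined from human poems; the point is that the program owns no language of its own and can only shuffle fragments people wrote.

```python
import random

# Toy "poetry" in the recombination style: the machine contributes
# no words, only a reshuffling of human-written fragments.
HUMAN_SNIPPETS = [
    "the moon leans low", "over sleeping water",
    "a gull cries once", "and the tide answers",
    "grey light gathers", "in the empty harbor",
]

def recombine_poem(n_lines: int = 3, seed: int = 42) -> str:
    rng = random.Random(seed)  # seeded so the "poem" is repeatable
    return "\n".join(
        f"{rng.choice(HUMAN_SNIPPETS)}, {rng.choice(HUMAN_SNIPPETS)}"
        for _ in range(n_lines)
    )

print(recombine_poem())
```

The output can look evocative, but any beauty in it was borrowed, not created, which is why such efforts have been judged less successful.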

Ok, this is nothing like the comparisons of human and robotic behavior Gene Roddenberry and his staff of writers gave us with Data in Star Trek: The Next Generation. The episode Inheritance is absolutely the most thought-provoking and, to further whet your expectations, shows how robots would attempt to play music in the most “human” way possible. We have to remind ourselves that the show offered mere ideas of how a robot might go about creating art. No robots were involved in the actual rendering.

A real-life scenario does exist if you’re hungering for actual progress. Check out the composition performed last month using output from a quantum computer, generated in response to a feed from mezzo-soprano Juliette Pochin. The feed was recombined with her voice to create a live duet. Admittedly, the computer portion isn’t a creation requiring, dare we say, machinations on the part of the computer.

Fear not, however, because one failed effort provides a glimpse at what we might be looking for in a unique AI. As Wired magazine reports, computers have become “freakishly good at identifying what they’re looking at,” yet scientists were baffled by a 2015 experiment in which their AI failed to separate a number of random designs from defined images. These “cutting-edge deep neural networks” returned incorrect responses with a high degree of confidence, and the researchers were unable to determine why.

For example, when presented with a design of yellow and black bars, the AI returned the response, “School Bus.” Again and again, the machine associated a thing with a variety of designs rather than recognizing the lack of an association. The concern is that these failures call into question the future of facial recognition algorithms.
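Part of the explanation is structural: a classifier’s final softmax layer distributes 100% of its belief among the labels it knows, with no built-in “this is not anything” option. The toy sketch below uses hypothetical raw scores for a yellow-and-black striped pattern: nothing matches well, but “school bus” matches least badly, so the forced guess comes out looking confident.

```python
import math

# Why a network can be "confidently wrong" on nonsense: softmax must
# hand out all of its probability mass among known labels.
LABELS = ["school bus", "lizard", "baseball"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for yellow-and-black bars: a weak match all
# around, but "school bus" is the least weak.
scores = [4.0, 0.5, 0.2]
probs = softmax(scores)
best = max(zip(LABELS, probs), key=lambda p: p[1])
print(f"{best[0]}: {best[1]:.0%} confident")  # → school bus: 95% confident
```

The 95% figure isn’t a measure of how bus-like the stripes are; it’s only a statement that the other two options fit even worse.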

Yet in these errors we find the very thing that could suggest a unique thought process. A true visionary like Salvador Dalí might return the same response, or at least argue in favor of the robot. If the image evokes “school bus,” is the image not, in the mind of the beholder, that of a school bus? Are the designs in question actually a type of Rorschach test reflecting the mind of the machine?

Even more promising is the fact that the machine found objects in pictures of pure static. What is it seeing? The researchers propose that perhaps a minute string of pixels triggered each object recognition, but doesn’t that make the response even more true from the perspective of the robot? Scientists call this a problem and an error, but if one robot shows a picture of static to another robot and they both see it as a lizard, who’s to say it isn’t a picture of a lizard?

These types of unexpected turns may lead us to the independent robot we’ve come to envision. Meanwhile, according to Dr. Carr, the most probable next step will be for a computer to create an Impressionist painting after receiving as input a series of Impressionist images, much like the poetry example above. (Within days after our discussion, Dr. Carr sent a link to an article that mentions exactly this type of effort, only instead of Impressionists the programmers used Rembrandt.)

Ok, maybe it’s a disappointing thought. If a college student were to present a robotic compilation of Monet lilies in an art class, it’d take some hard arguing to convince the teacher it was a unique creation. She would do better to create a Pollock-esque canvas of seemingly random splotches and call it a lizard.
