"Can machines be creative?" This question is the target of a recent Google undertaking, dubbed Project Magenta, focused on bringing artificial intelligence into the art world.
Magenta and other creative AI endeavors draw on the power of deep neural networks, systems that allow computers to sort through large amounts of data, recognize patterns, and eventually generate their own pictures, music and more. These networks had previously been put to artistic use by Google for its "DeepDream" project, which was designed to visualize how neural networks think. Researchers could feed the tool images, which it then reinterpreted into often abstract, often trippy works.
Last year, Google started Project Magenta to apply what it learned from these AI-created masterpieces to further push the limits of computer creativity in art, music, videos and more. Now, The New York Times' Cade Metz tuned into the software giant's recent projects to see (and hear) what's come of the endeavor.
Along with the announcement of Project Magenta last summer, Google released the neural network's first song. The Google team gave its algorithm four notes (C, C, G, G) to work with, and then let the machine compose a roughly 90-second song with a piano sound. The little ditty is upbeat, starting slow before picking up as a drum beat is added behind it, exploring patterns built from those four notes.
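Magenta's actual composer is a recurrent neural network trained on large collections of melodies, but the basic seed-then-sample idea can be illustrated with a toy model: start from the given notes, then repeatedly pick the next note from a probability table. The transition probabilities below are invented purely for illustration and have nothing to do with Google's trained model.

```python
import random

# Invented transition probabilities: given the last note, how likely
# is each candidate next note. A real model learns these from data.
TRANSITIONS = {
    "C": {"C": 0.3, "G": 0.4, "E": 0.3},
    "G": {"G": 0.3, "C": 0.3, "E": 0.4},
    "E": {"C": 0.5, "G": 0.5},
}

def generate(seed, length, rng=None):
    """Extend the seed to `length` notes by sampling one note at a time."""
    rng = rng or random.Random(0)
    melody = list(seed)
    while len(melody) < length:
        probs = TRANSITIONS[melody[-1]]
        notes, weights = zip(*probs.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(generate(["C", "C", "G", "G"], 16))
```

A neural network replaces the fixed lookup table with a learned function that conditions on the entire history of notes so far, which is what lets it produce longer-range structure than a simple chain like this one.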
But now, Google programmers are using those networks not only to create new pieces of music, but also new instruments. For example, a tool called NSynth has analyzed hundreds of notes played by a variety of modern instruments, mapping out the features that make a guitar sound like a guitar, or a trumpet sound like a trumpet. Using these maps, users can then combine instrument characteristics to create brand-new sound makers.
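The "maps" here are vectors of learned features, and blending two instruments amounts to interpolating between their vectors before decoding the result back into sound. As a conceptual sketch only: the four-dimensional embeddings below are invented, whereas NSynth's real embeddings are much larger and are produced by a trained encoder.

```python
import numpy as np

# Hypothetical feature vectors standing in for learned instrument
# embeddings (the real ones come from NSynth's neural encoder).
guitar = np.array([0.9, 0.1, 0.4, 0.7])
trumpet = np.array([0.2, 0.8, 0.6, 0.1])

def blend(a, b, t):
    """Linear interpolation between embeddings: t=0 gives a, t=1 gives b."""
    return (1 - t) * a + t * b

# Halfway between the two timbres, a "guitar-trumpet" hybrid.
hybrid = blend(guitar, trumpet, 0.5)
print(hybrid)  # -> [0.55 0.45 0.5  0.4 ]
```

Sliding `t` between 0 and 1 traces out a family of in-between instruments, which is what NSynth's interface exposes to musicians.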
A more recent project from Google trained an algorithm with examples of classical piano music to create a tool that can compose its own music within the framework of classical piano techniques, reports Matthew Hutson for Science. While you won't find Performance RNN, as the algorithm is called, composing a symphony any time soon, it can create short original music phrasings that are "quite expressive," as programmers Ian Simon and Sageev Oore wrote last month on the Project Magenta blog. And another algorithm has been trained from Magenta's code to be able to respond to notes that people play with its own original snippets of music, in effect creating a "duet" with an AI.
Other Google algorithms have edged further into the visual art world, reports Hutson. For example, the algorithm SketchRNN has analyzed thousands of human drawings to teach a computer to create basic sketches of common shapes, such as chairs, cats and trucks.
Once these models have been "trained," writes Google researcher David Ha, the computer can analyze and recreate previously submitted drawings in original ways. It can even correct mistakes that researchers deliberately introduced, making the images more accurate, such as redrawing a pig submitted with five legs so that it has four. Similar to the blended instruments of NSynth, artists can game these models by doing things like submitting drawings of chairs to a program that draws cats, creating blended sketches that lie somewhere between the two shapes.
Some other projects haven't worked out just yet, Hutson reports, such as a tool to create new jokes. (They just weren't funny.)
Google isn't the only one interested in artsy AI. As Metz notes, last year researchers at Sony trained a neural network to compose new songs in the styles of existing artists, even creating a pop song that resembles a composition by the Beatles. Another neural network composed its own Christmas song when shown a picture of a Christmas tree.
Though some people are concerned that AI could replace us all, developers don't see these tools as ever supplanting human creativity, Hutson reports. Rather, these algorithms are tools that can help inspire and channel imagination into new creations.
Maybe one day, your muse could be a computer.