Last year, a group of German computer scientists made waves by demonstrating a new computer algorithm that could transform any digital still image into artwork mimicking the painterly styles of masters like Vincent van Gogh, Pablo Picasso, and Edvard Munch. Though the technique was an impressive feat, applying it to moving images seemed outrageous at the time. But now, another group of researchers has figured it out, quickly and seamlessly producing moving digital masterpieces, Carl Engelking writes for Discover.
In a video demonstration, the programmers show off their algorithm’s artistic abilities by transforming scenes from movies and television shows like Ice Age and Miss Marple into painting-like animations with the click of a mouse. But developing the algorithm was no small feat.
To create such a detailed transformation, computer scientist Leon Gatys and his colleagues at the University of Tübingen developed a deep-learning algorithm that runs off an artificial neural network. By mimicking the ways that neurons in the human brain make connections, these machine learning systems can perform much more complicated tasks than any old laptop.
Here’s how it works: when you’re looking at a picture of a painting or watching a movie on your laptop, you’re witnessing your computer decode the information in a file and present it in the proper manner. But when these images are processed through a neural network, the computer is able to take the many different layers of information contained in these files and pick them apart piece by piece.
For example, one layer might contain the information for the basic colors in van Gogh’s Starry Night, while the next adds a little more detail and texture, and so on, according to the MIT Technology Review. The system can then alter each layer individually before putting them back together to create a whole new image.
“We can manipulate both representations independently to produce new, perceptually meaningful images,” Gatys wrote in a study published to the arXiv preprint server.
By applying this system of layer-based learning to paintings by Picasso and van Gogh, to name a few, the researchers were able to develop an algorithm that “taught” the computer to interpret all this information in a way that separates the content of a painting from its style. Once it understood how van Gogh used brushstrokes and color, it could then apply that style like a Photoshop filter to an image and effectively recreate it in his iconic style, Matt McFarland wrote for the Washington Post. But applying this technique to video presented a whole new set of problems.
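In the researchers’ published method, the “style” of a painting is captured by the correlations between a network layer’s feature channels (so-called Gram matrices), while the “content” is the raw activations at a deeper layer. The sketch below is a minimal illustration of that split, using random arrays in place of real convolutional features; the function names and weighting values are illustrative, not taken from the paper.

```python
import numpy as np

def gram_matrix(features):
    """Style representation: channel-by-channel correlations of a
    feature map with shape (channels, height, width)."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # flatten the spatial dimensions
    return flat @ flat.T / (h * w)      # (c, c) correlation matrix

def style_content_loss(x_feats, content_feats, style_feats,
                       alpha=1.0, beta=1e3):
    """Weighted sum of a content term (raw activations) and a style
    term (Gram-matrix differences). Weights alpha/beta are illustrative."""
    content_loss = np.mean((x_feats - content_feats) ** 2)
    style_loss = np.mean((gram_matrix(x_feats)
                          - gram_matrix(style_feats)) ** 2)
    return alpha * content_loss + beta * style_loss

# Toy feature maps standing in for a real network's activations.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16, 16))
style = rng.standard_normal((8, 16, 16))

# An image that already matches both representations has zero loss;
# the real algorithm iteratively adjusts pixels to drive this loss down.
print(style_content_loss(content, content, content))  # 0.0
```

In the actual system, these feature maps come from a pretrained convolutional network, and gradient descent on the input image minimizes the combined loss, which is what lets one image’s content wear another’s style.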
“In the past, manually re-drawing an image in a certain artistic style required a professional artist and a long time,” Manuel Ruder and his team from the University of Freiburg write in their new study, also published on arXiv. “Doing this for a video sequence single-handed was beyond imagination.”
When Ruder and his colleagues first tried applying the algorithm to videos, the computer churned out gobbledygook. Eventually, they realized that the program was treating each frame of the video as a separate still image, which caused the video to flicker erratically. To get past this issue, the researchers put constraints on the algorithm that kept the computer from deviating too much between frames, Engelking writes. That allowed the program to settle down and apply a consistent style across the entire video.
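The constraint described above can be sketched as an extra penalty term added to each frame’s loss. In the published video method, the previous stylized frame is first warped along the optical flow so that genuine motion is not penalized; the simplified numpy sketch below omits that warping step (as noted in the comments), and the weight value is an assumption for illustration.

```python
import numpy as np

def temporal_penalty(current_frame, prev_stylized, weight=10.0):
    """Simplified temporal-consistency term: penalize pixel-wise change
    from the previous stylized frame. (The published method first warps
    prev_stylized with optical flow so real motion is not penalized;
    that step is omitted here for brevity.)"""
    return weight * np.mean((current_frame - prev_stylized) ** 2)

# Toy frames: a large frame-to-frame jump (flicker) is penalized far
# more than a small, steady change.
rng = np.random.default_rng(1)
prev = rng.standard_normal((3, 32, 32))
flickery = prev + rng.standard_normal((3, 32, 32))
steady = prev + 0.01 * rng.standard_normal((3, 32, 32))

print(temporal_penalty(flickery, prev) > temporal_penalty(steady, prev))  # True
```

Adding this term to the per-frame style loss is what keeps the algorithm from re-inventing the painting from scratch on every frame, which is the behavior that caused the flickering.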
The algorithm isn’t perfect and often has trouble handling larger and faster motion. However, this still represents an important step forward in the ways computers can render and alter video. While it is in its early stages, future algorithms might be able to apply this effect to videos taken through a smartphone app, or even render virtual reality versions of your favorite paintings, the MIT Technology Review reports.
While the idea of boiling down an artist’s style to a set of data points may rankle some people, it also opens the door to new kinds of art never before believed possible.