For better or for worse (probably a bit of both), the age of synthetic media is here. We’re getting to the point where artificial intelligence can create its own videos, images, and sound clips.
Among the cool projects out there is Timecraft, created by the MIT Computer Science & Artificial Intelligence Laboratory. Using new machine learning techniques, it can deduce how a painting was made (the patterns, the method, the number of brush strokes, the nuances, etc.) and recreate it in real time, producing a time-lapse video of the masterpiece coming to life.
It’s not perfect yet. Trained eyes (and sometimes even uninitiated ones) can often pick up on flaws. But so far it has been rated much higher than similar technologies. When it was used to recreate paintings that already have human-made time-lapse videos, it fooled 50% of the people who took an online survey. This new tech might just give us a glimpse into how the world’s art masterpieces were created.
How Does It Work?
Timecraft is made possible by a branch of computer science called machine learning. Basically, you feed a computer tons of input to help it learn and gain an “understanding” of the material it’s dealing with. And then, using all of the information it has absorbed, it can produce new material.
So in the case of Timecraft, the inventors fed it over 200 art time-lapse videos featuring watercolor paintings in the making. The system analyzes all the input it sees. For example, it registers which movements produce which shapes, what happens when certain colors mix, or how long it takes to add a given detail. It’s then supposed to be able to look at a painting it has never seen before, and figure out the process that went into making it.
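To make that learn-from-examples idea concrete, here’s a heavily simplified sketch in Python. This is not Timecraft’s actual model (which works on video frames with neural networks); the stroke and shape names here are invented purely for illustration. The principle is the same, though: tally up which actions produced which results in the training examples, then use those tallies to guess the process behind something new.

```python
from collections import Counter, defaultdict

# Toy "training data": (stroke, resulting shape) pairs as if observed
# in time-lapse videos. These labels are made up for the example;
# the real system learns from raw video frames.
observations = [
    ("broad wash", "background"),
    ("broad wash", "background"),
    ("fine liner", "outline"),
    ("fine liner", "outline"),
    ("dry brush",  "texture"),
    ("fine liner", "texture"),
    ("fine liner", "texture"),
]

# "Training": count how often each stroke produced each shape.
counts = defaultdict(Counter)
for stroke, shape in observations:
    counts[shape][stroke] += 1

def most_likely_stroke(shape):
    """Guess which stroke most often produced a given shape."""
    return counts[shape].most_common(1)[0][0]

# "Inference": given a finished painting broken into shapes, guess
# the stroke sequence that produced it.
new_painting = ["background", "outline", "texture"]
guessed_process = [most_likely_stroke(s) for s in new_painting]
print(guessed_process)  # ['broad wash', 'fine liner', 'fine liner']
```

The real system replaces these hand-counted tallies with a neural network and millions of parameters, but the workflow is the same: absorb examples, then generalize to a painting it has never seen before.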
It has already shown that it has potential by recreating some time-lapse videos. If the technology is completely successful down the line, it might be able to take the world’s great masterpieces and create videos accurately showing how they were made.
It’s not hard to see the potential benefits of Timecraft and other synthetic media tech. We could learn new art techniques and maybe even uncover the elusive secrets of what makes good art. That could give us fundamental insights into a cultural pillar of ours and perhaps tell us something more about the human condition. But is there a problem with synthetic media?
In April 2020, Jay-Z sued a tech firm for copyright infringement because it used artificial intelligence to create a recital of Shakespeare’s sonnets spoken in his voice. What if someone could produce a video of you robbing a store? How would you prove your innocence if the storekeeper was in on it? Clearly, this could be reality-altering technology, so we should try to make it work, but we also need to be careful.