Using deep neural nets, it is possible to change a photo or video to mimic the style of a piece of art. Three images are involved: the original image, the style source image, and the pastiche, the generated image that combines the content of the first with the style of the second. Here’s an example of J at Jack Block Park using the style of Rain Princess by Leonid Afremov. You can see the pastiche appears to be made of colorful oil strokes.
We run the original image through a neural net initialized with weights from a VGG19 model pretrained on ImageNet. Starting from pretrained weights gives us a good basis of features important to object categorization, along with some invariance to translation, rotation, and so on.
The neural net’s loss function seeks to create a pastiche that minimizes both content loss (how far the pastiche drifts from the content of the original image) and style loss (how far it drifts from the style of the style source image). By adjusting the weights on these two terms, we can change the relative importance of minimizing content versus style loss.
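To make the two terms concrete, here is a minimal NumPy sketch of the loss, following the standard formulation: content loss is a mean squared error between feature maps, style loss compares Gram matrices of feature maps, and `alpha` and `beta` are the weights that trade one off against the other. The feature maps here are random toy arrays, and the layer shapes and default weights are illustrative assumptions, not the exact values used for the images in this post.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height*width) feature map: channel-to-
    channel correlations, which capture texture/style rather than layout."""
    c, n = features.shape
    return features @ features.T / n

def content_loss(pastiche_feats, content_feats):
    """MSE between the pastiche's and the original image's feature maps."""
    return np.mean((pastiche_feats - content_feats) ** 2)

def style_loss(pastiche_feats, style_feats):
    """MSE between the pastiche's and the style image's Gram matrices."""
    return np.mean((gram_matrix(pastiche_feats) - gram_matrix(style_feats)) ** 2)

def total_loss(pastiche_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    """Weighted sum; raising beta relative to alpha favors style over content."""
    return (alpha * content_loss(pastiche_feats, content_feats)
            + beta * style_loss(pastiche_feats, style_feats))

# Toy feature maps: 8 channels over a 4x4 grid flattened to 16 positions.
rng = np.random.default_rng(0)
content = rng.standard_normal((8, 16))
style = rng.standard_normal((8, 16))
pastiche = rng.standard_normal((8, 16))
print(total_loss(pastiche, content, style))
```

Cranking `alpha` up pulls the pastiche toward the original photo; cranking `beta` up pulls it toward the brushwork of the style source.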
Here’s a picture of Cherry Creek Falls in the style of Vincent van Gogh’s The Starry Night. In the first pastiche, minimizing content loss is more important. In the second pastiche, minimizing style loss is more important.
And here’s a video of me floating in a pool. Even on a 10-second 240×134 video, the style transfer process was slow, because I trained a neural net on each frame. At 60fps, a 10-second clip has 600 frames, so 600 corresponding neural nets were trained.
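The per-frame cost can be sketched as follows. The `stylize` function here is a hypothetical stand-in for one full optimization run, not the actual implementation; the point is simply that every frame triggers its own independent training pass.

```python
# Illustrative accounting of the per-frame pipeline (assumptions, not measurements).
fps = 60
duration_s = 10

def stylize(frame):
    """Stand-in for one optimization run; in the real process this is
    many gradient steps producing a pastiche for this single frame."""
    return frame  # placeholder: the real version returns a stylized frame

frames = list(range(fps * duration_s))          # 600 frames for a 10 s clip
pastiche_frames = [stylize(f) for f in frames]  # 600 independent optimizations
print(len(pastiche_frames))
```

This is why later "fast" style transfer approaches train a single feed-forward network once per style, then apply it to each frame in a single pass.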