To create dynamically looping videos (aka video textures), we use sum-of-squared-differences (SSD) comparisons between proposed frame successors and actual frame successors to identify good transitions between temporal positions in a training video. We then reuse the content of the source video, probabilistically jumping between its frames in a way that (a) takes advantage of high-quality transitions and (b) avoids dead ends. We continue writing out frames until we've produced a video of the desired (yet theoretically arbitrary) length.
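The core of this idea can be sketched in a few lines of numpy. This is a minimal illustration, not a full implementation: it scores a jump from frame i to frame j by the SSD between frame j and frame i's true successor, converts those scores to probabilities, and samples a frame sequence. The wraparound successor for the last frame (a cheap stand-in for proper dead-end avoidance via future-cost propagation), the `sigma` parameter, and the function names are all assumptions of this sketch.

```python
import numpy as np

def transition_probabilities(frames, sigma=0.1):
    """Jump probabilities between frames of a source video.

    frames: array of shape (N, ...) -- one frame per leading index.
    Returns an (N, N) matrix P where P[i, j] is the probability of
    jumping from frame i to frame j. A jump i -> j is good when frame j
    resembles frame i's actual successor, so each candidate j is scored
    by the SSD between frame j and the true successor of frame i.
    """
    n = len(frames)
    flat = frames.reshape(n, -1).astype(np.float64)
    # Pairwise SSD via the identity |a - b|^2 = |a|^2 + |b|^2 - 2 a.b
    sq = np.sum(flat ** 2, axis=1)
    ssd = sq[:, None] + sq[None, :] - 2.0 * flat @ flat.T
    ssd = np.maximum(ssd, 0.0)  # guard against tiny negative round-off

    # succ[i] is frame i's true successor; the last frame wraps to the
    # first (an assumption -- a crude substitute for dead-end handling).
    succ = np.roll(np.arange(n), -1)
    d = ssd[succ]  # d[i, j] = SSD(true successor of i, frame j)

    # Smaller SSD -> higher jump probability; sigma controls how
    # strongly we prefer the very best transitions.
    p = np.exp(-d / (sigma * ssd.mean() + 1e-12))
    return p / p.sum(axis=1, keepdims=True)

def synthesize(frames, length, rng=None):
    """Sample a frame-index sequence of the given length."""
    rng = np.random.default_rng(0) if rng is None else rng
    p = transition_probabilities(frames)
    n = len(frames)
    seq = [0]
    for _ in range(length - 1):
        seq.append(int(rng.choice(n, p=p[seq[-1]])))
    return seq
```

Because each row of P puts its highest weight on the true successor (whose SSD score is zero), the sampled sequence mostly plays the source video forward, occasionally jumping to a visually similar moment elsewhere in the clip.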
In each of the examples below, the first video is the training source and the second is the program output.
The following texture contains rare footage of a certain dark-furred lagomorph chewing on parsley.