I took the TED Talk "How computers understand pictures" from YouTube and split it into frames. I then ran the frames through Google's im2txt image-captioning model to generate a caption for each one, and used the resulting text to create a supercut of the original video.
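The post doesn't describe exactly how the captions were turned into cuts, but one simple approach is to group consecutive frames that receive the same caption into a single segment. A minimal sketch of that grouping step, assuming one caption per sampled frame at a fixed sampling rate (both assumptions mine, not from the original pipeline):

```python
# Sketch: turn a list of per-frame captions into supercut segments.
# Assumes frames were sampled at a fixed rate (default: one per second);
# the actual sampling rate and grouping rule are not specified in the post.

def captions_to_segments(captions, seconds_per_frame=1.0):
    """Group consecutive identical captions into (start, end, caption) segments."""
    segments = []
    for i, caption in enumerate(captions):
        start = i * seconds_per_frame
        if segments and segments[-1][2] == caption:
            # Caption unchanged: extend the current segment to cover this frame.
            segments[-1] = (segments[-1][0], start + seconds_per_frame, caption)
        else:
            # Caption changed: start a new segment here.
            segments.append((start, start + seconds_per_frame, caption))
    return segments

if __name__ == "__main__":
    caps = [
        "a man standing on a stage",
        "a man standing on a stage",
        "a group of people sitting in chairs",
    ]
    print(captions_to_segments(caps))
    # → [(0.0, 2.0, 'a man standing on a stage'),
    #    (2.0, 3.0, 'a group of people sitting in chairs')]
```

The resulting (start, end) ranges could then be cut from the source video with any editing tool, pairing each clip with its machine-generated caption.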

This experiment explores techniques for producing video aesthetics that reflect how AI systems interpret images.

Here is an excerpt from the video: