Lacework

Description (in English)

Lacework is a new work by Everest Pipkin that uses artificial neural networks to reinscribe the videos of MIT’s Moments in Time Dataset. Using algorithms that stretch time and add details to images, Pipkin creates a series of hallucinatory slow-motion vignettes from the videos of everyday actions that form the collection.
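The slow-motion effect rests on frame interpolation: synthesising new frames between each pair of originals so that a 3-second clip plays out over a much longer span. As a rough illustration only, the idea can be sketched as a naive linear cross-fade in Python; this stands in for, and is much simpler than, the neural interpolation the work actually uses:

    import numpy as np

    def stretch_time(frames, factor=4):
        # Insert (factor - 1) blended frames between each consecutive
        # pair, multiplying the clip's duration by roughly `factor`.
        # A linear cross-fade stands in here for the learned
        # interpolation a neural model would perform.
        out = []
        for a, b in zip(frames[:-1], frames[1:]):
            for t in np.linspace(0.0, 1.0, factor, endpoint=False):
                blend = (1.0 - t) * a.astype(float) + t * b.astype(float)
                out.append(blend.astype(a.dtype))
        out.append(frames[-1])
        return out

The "added details" are the analogous operation in space rather than time: the frames are enlarged, with a neural network filling the new resolution with plausible, hallucinated texture.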

The Moments in Time Dataset was developed in 2018 to help automated systems recognise and understand actions in video. It contains one million 3-second videos scraped from websites like YouTube and Tumblr, each tagged with a single verb such as asking, resting, snowing or praying.

Each of the 339 verb tags contains thousands of videos ranging from the very personal to the widely recognisable. For instance, the 'Drumming' tag includes a high school marching band, an excerpt of Animal from The Muppets, a performer in a subway station and a YouTube tutorial, among others. 'Flying' includes a view from the window of an airplane, a bee circling a flower, a satellite rotating above the earth, a flock of flamingos and a skydiver yelling something we cannot hear.
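Structurally, the dataset is a large index of (clip, verb) pairs: 339 verb classes, each mapping to thousands of 3-second videos. A minimal sketch of reading such an index, assuming a hypothetical two-column 'video_path,label' CSV (the real distribution's file layout may differ):

    import csv
    from collections import defaultdict

    def load_index(path):
        # Group video paths by verb tag, assuming each row of the
        # (hypothetical) index file is "video_path,label".
        by_verb = defaultdict(list)
        with open(path, newline="") as f:
            for video_path, label in csv.reader(f):
                by_verb[label].append(video_path)
        return by_verb

    # e.g. index["drumming"] -> thousands of 3-second clip paths
    index = load_index("moments_index.csv")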

By manipulating the source videos of the dataset, Lacework presents a river of these moments, as if captured in amber, flowing from one to another in a cascade of gradual, unfolding details.

Source: https://thephotographersgallery.org.uk/whats-on/digital-project/everest…

Situation machine vision is used in

Authored by

UUID
6e5a09fa-ec5b-4c91-92cc-af1c65f82a1e