This database collects information about games, art and narratives that use or represent machine vision technologies. It is currently (June 2021) nearing completion, but we are still fixing details and have not yet finalised the design. The database is part of the ERC project *Machine Vision in Everyday Life: Playful Interactions with Visual Technologies in Digital Art, Games, Narratives and Social Media*.
| Title | Year | Country | Creator | Type of work | Machine vision technologies referenced |
|---|---|---|---|---|---|
| Klara and the Sun | | United Kingdom | Kazuo Ishiguro | Narrative, Novel | AI (General Purpose Artificial Intelligence), Object recognition |
| AI Facial Profiling, Levels of Paranoia | | Switzerland | Marta Revuelta, Laurent Weingart | Art, Installation art | Machine learning |
| Deep Down Tidal | | French Guiana | Tabita Rezaire | Art, Video art | Cameraphone, Drones (UAV), Object recognition |
| The Other Nefertiti | | Germany, Egypt, Iraq | Nora Al-Badri, Jan Nikolai Nelles | Art | |
| INTERHUMAN | | Canada, Argentina, Mexico | Isabella Salas, Nora Isabel Golic | Art, Video art | |
Machine Vision Situations
**Situation:** Klara and the Sun (Klara sees the world as partitioned)

**Brief description:** When Klara experiences a situation as ambivalent or otherwise has trouble analysing what she sees, her visual input becomes "partitioned", divided into separate boxes. This is presumably a reference to the way image recognition software draws boxes around objects that have been identified. The partitioning is never explained, and Klara does not see it as odd, although she does appear relieved when the partitions disappear.
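The "boxes" Klara sees resemble the labelled bounding boxes that object-recognition systems overlay on an image. A minimal sketch of that output structure, with entirely hypothetical names (`Detection`, `partition_scene`) and made-up example data, might look like this:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One recognised object: a label, a confidence score, and a box."""
    label: str
    confidence: float
    box: tuple  # (x_min, y_min, x_max, y_max) in pixels

def partition_scene(detections, threshold=0.5):
    """Keep only confident detections -- the rectangles a recognition
    system would draw, 'partitioning' the scene into labelled boxes."""
    return [d for d in detections if d.confidence >= threshold]

detections = [
    Detection("person", 0.92, (40, 20, 180, 300)),
    Detection("dog", 0.31, (200, 150, 260, 220)),   # dropped: low confidence
    Detection("chair", 0.77, (300, 100, 420, 280)),
]
kept = partition_scene(detections)
```

In a real detector (e.g. a YOLO- or R-CNN-style model) the detections would come from a neural network rather than being listed by hand; the point here is only the box-per-object structure the novel seems to evoke.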
**Situation:** AI Facial Profiling, Levels of Paranoia (classifying threat)

**Brief description:** The visitor (user) is scanned with a gun-like device. Facial recognition software, using a machine learning model trained on faces of gun users from YouTube videos and on selfies, determines whether the person's facial characteristics fit the pattern of either HIGH-risk gun users or LOW-risk selfies. The classification result is stamped on a document bearing an image of the visitor's face, which is then sorted into one of two transparent boxes. The installation represents a security-control situation familiar from, for example, airports, where the recognition of threat is outsourced to a supposedly neutral machine.

**Aesthetic characteristics:** Metallic, Mechanical, Authoritative
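The installation's decisive step is a binary classification: a model score is thresholded into the two labels stamped on the document. A sketch of that step, assuming a hypothetical `classify_threat` function and an arbitrary 0.5 cut-off (the work's actual model and threshold are not documented here):

```python
def classify_threat(score, threshold=0.5):
    """Map a model's confidence score (0.0-1.0) to the two labels
    the installation stamps on its documents. The score itself would
    come from the facial-recognition model; here it is just an input."""
    return "HIGH risk gun user" if score >= threshold else "LOW risk selfie"

verdict = classify_threat(0.87)
```

The sketch makes the artwork's critique concrete: however elaborate the model, the "neutral" machine ultimately reduces a face to one side of a single threshold.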
**Situation:** Deep Down Tidal (biased image search)

**Brief description:** In the foreground, a Black woman is on her smartphone. In the background, image searches for feet, dreadlocks and children appear in browsers; the results mainly show the feet of white people, white people with dreadlocks, and fair-skinned children. A voice, in the style of a phone conversation, states: "This is racism", "this is really not good", "It is like they are not treating us equal", "They are not treating us as a human being". The sequence ends with the written text "Google why u mad?", referencing the bias of image search engines as an example of electronic colonialism, which the video later defines as "a domination and control of digital technologies by the West to maintain and expand the hegemony over the rest of the world". The sequence demonstrates how search engines dominated by Western culture and values exclude other worldviews.

**Aesthetic characteristics:** Desktop-documentary, net.art, Collage
**Situation:** The Other Nefertiti (Nefertiti hack)

**Brief description:** The artists make a 3D scan of the Nefertiti bust without permission and publish the data for anyone to use. They create a 3D print of the model and return it to Egypt, while institutions like the Neues Museum Berlin hoard illegally looted and exported Egyptian heritage and its digital counterparts.

**Aesthetic characteristics:** Cultural heritage, 3D
**Situation:** INTERHUMAN (latent walks)

**Brief description:** A neural network is trained on multicultural portraits photographed by one of the creators. The generated images, presented as a video piece of "latent walks" (movements through the model's latent space), create a morphing effect in which a new "interhuman" face is in continuous flux, celebrating the diversity of humanity and expressing layers of emotion.
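A "latent walk" is typically produced by interpolating between points in a generator's latent space and rendering each intermediate vector as a frame, which yields the morphing effect described above. A minimal sketch of the interpolation step, with hypothetical names (`latent_walk`) and toy two-dimensional vectors standing in for the high-dimensional latent codes a real generator would use:

```python
def latent_walk(z_start, z_end, steps):
    """Linearly interpolate between two latent vectors.
    Each returned vector would be fed to a trained generator
    (e.g. a GAN) to render one frame of the morphing video."""
    frames = []
    for i in range(steps):
        t = i / (steps - 1)  # t runs from 0.0 to 1.0
        frames.append([(1 - t) * a + t * b for a, b in zip(z_start, z_end)])
    return frames

# Toy example: walk between two 2-D latent points in five steps.
path = latent_walk([0.0, 1.0], [1.0, -1.0], steps=5)
```

Real latent walks often use spherical rather than linear interpolation to stay on the distribution the generator was trained on, but the principle — a smooth path through latent space, decoded frame by frame — is the same.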