Trevor Paglen’s latest body of work, A Study of Invisible Images, is riddled with contradictions — starting, of course, with its title. (What defines an image if not its visuals?) The Berlin-based artist and geographer is known for his investigations into surveillance apparatuses, which provide the perverse, voyeuristic satisfaction of watching the systems meant to watch you. In the past, that has taken the form of extreme long-distance photographs of drones appearing as black smudges against a moody sky, and deep-sea images of the fiber-optic cables that make up the internet’s infrastructure. Those projects pushed physical boundaries to expose the discrepancies between what we culturally accept as out of sight (and out of mind) and the actual limits of our ability to see. But Paglen’s latest work goes further: it attempts to peer into the world of machine vision.
A Study of Invisible Images, which is showing at New York’s Metro Pictures until October 21, illuminates the ways that machines interpret photographs — that is, through “computer vision” and machine learning — as well as the fact that so many images today are made by machines, and for machines. He calls some of these “invisible images” because they can’t be seen by human eyes without visualizing software. As Paglen puts it in his statement: “Machines have learned to see. Without us.”
The exhibition consists of various examples of “invisible images.” These include visualizations of the data sets that train machines to identify subjects and objects in an image. One of those visualizations features what Paglen refers to as a “face-print” of post-colonial theorist Frantz Fanon. The ghostly portrait, which looks as if it emerged out of the ether of collective consciousness, is actually a representation of the data that makes Fanon’s features mathematically distinct from other faces. In another series, titled Hallucinations, oddly expressive prints resemble abstract expressionist paintings. Each is an amalgamation of a massive set of images that Paglen compiled to train a computer to recognize “irrational” concepts, such as the symbolism used in the interpretation of dreams.
Paglen urges us to understand the ways in which we are being watched. But on a more philosophical level, he also asks us to unlearn seeing like humans — to broaden the meaning of an image to include characteristics seemingly incompatible with our accepted understandings, and to stop assuming that the way we view images is what defines them. Consider this: When you turn off your iPhone, and all of your photos are reduced to data, do they remain images? Paglen proposes that the answer is yes.
But the actual “invisible images” that Paglen warns us about never appear in the show. Because, of course, they can’t. They can only be pointed to through surrogate manifestations tailored to our terms of seeing. And his descriptions do a similar trick: To say that machines “see” is at most an analogy filling in for a failure of language.
It is in these subtle limits of translation that the show’s richest implications linger. Do humans alone get to define what counts as an image? Can a machine have a gaze in the same way that a human does? Paglen makes the answers to these questions seem simpler than they are, likening machines to humans to evoke a sense of sci-fi drama. The danger of personifying machines is that it obscures the people and corporations who make, operate, and profit from them. Anyone who reads Paglen’s annotated checklist for the show, which describes the technology behind each piece, could imagine the dystopian ways in which machines might be used. But the artworks actually work to quell those anxieties, infused with a very human romanticism that warms their emotionless utility, normalizes their presence in an art space, and ultimately obscures how they are actually used — and, more importantly, by whom.
Trevor Paglen: A Study of Invisible Images continues at Metro Pictures (519 West 24th Street, Chelsea, Manhattan) through October 21.