ImageNet will remove 600,000 images of people from its database after an art project exposed racial bias in the artificial intelligence systems trained on it.
Created in 2009 by researchers at Princeton and Stanford, the online image database has been widely used by machine learning projects. It has pulled more than 14 million images from across the web, which have been categorized by workers on Amazon Mechanical Turk, a crowdsourcing platform through which people can earn money performing small tasks for third parties. According to the results of an online project by AI researcher Kate Crawford and artist Trevor Paglen, prejudices in that labor pool appear to have biased the machine learning data.
Training Humans — an exhibition that opened last week at the Prada Foundation in Milan — unveiled the duo’s findings to the public, but part of their experiment also lives online at ImageNet Roulette, a website where users can upload their own photographs to see how the database might categorize them. (Crawford and Paglen have also released “Excavating AI,” an article explaining their research.) The application will remain open until September 27, when its creators will take it offline; in the meantime, ImageNet Roulette has gone viral on social media because of its spurious, and often cringeworthy, results.
For example, the program defined one white woman as a “stunner, looker, mantrap,” and “dish,” describing her as “a very attractive or seductive looking woman.” Many people of color have noted an obvious racist bias in their results. Jamal Jordan, a journalist at the New York Times, explained on Twitter that each of his uploaded photographs returned tags like “Black, Black African, Negroid, or Negro.” And when one user uploaded an image of the Democratic presidential candidates Andrew Yang and Joe Biden, Yang, who is Asian American, was tagged as “Buddhist” (he is not) while Biden was simply labeled as “grinner.”
In recent months, researchers have explored how biases against women and people of color manifest in facial recognition services offered by companies like Amazon, Microsoft, and IBM. Critics worry that this technology, which is increasingly sold to state and federal law enforcement agencies, might encourage police overreach as a crime-fighting tool. There are also concerns that such tools might unconstitutionally violate a person’s right to privacy under the Fourth Amendment.
“This exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evokes colonial projects of the past,” Paglen told the Art Newspaper.
For the artist, ImageNet’s problems are inherent to any kind of classification system. If AI learns from humans, the rationale goes, then it will inherit all the same biases that humans have. Training Humans simply exposes how technology’s air of objectivity is more façade than reality.