
Artist Trevor Paglen and AI researcher Kate Crawford, categorized by their own project (image courtesy ImageNet Roulette)
ImageNet will remove 600,000 images of people stored in its database after an art project exposed racial bias in the program’s artificial intelligence system.
Created in 2009 by researchers at Princeton and Stanford, the online image database has been widely used by machine learning projects. The program has pulled more than 14 million images from across the web, which have been categorized by workers on Amazon Mechanical Turk — a crowdsourcing platform through which people can earn money performing small tasks for third parties. According to the results of an online project by AI researcher Kate Crawford and artist Trevor Paglen, prejudices in that labor pool appear to have biased the machine learning data.
Training Humans — an exhibition that opened last week at the Prada Foundation in Milan — unveiled the duo’s findings to the public, but part of their experiment also lives online at ImageNet Roulette, a website where users can upload their own photographs to see how the database might categorize them. (Crawford and Paglen have also released “Excavating AI,” an article explaining their research.) The application will remain open until September 27, when its creators will take it offline; in the meantime, ImageNet Roulette has gone viral on social media because of its spurious, and often cringeworthy, results.
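In mechanical terms, ImageNet Roulette reportedly runs each uploaded photo through a classifier trained on ImageNet’s person categories and displays the best-matching labels. The short Python sketch below illustrates that general pattern with an off-the-shelf pretrained model from torchvision; the model, weights, and label list are illustrative stand-ins rather than the project’s actual code, and the file path is hypothetical.

```python
import torch
from PIL import Image
from torchvision import models

# Illustrative sketch: any ImageNet-trained classifier will do; ResNet-50 is used here.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # the resize/crop/normalize steps this model expects

def top_label(path: str) -> str:
    """Return the highest-scoring ImageNet class name for the photo at `path`."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)
    with torch.no_grad():
        scores = model(batch)
    index = scores.argmax(dim=1).item()
    # Standard torchvision weights ship the 1,000 ImageNet class names; ImageNet
    # Roulette instead drew its labels from WordNet's "person" branch, which is
    # where the offensive categories came from.
    return weights.meta["categories"][index]

print(top_label("uploaded_photo.jpg"))  # hypothetical uploaded file
```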
For example, the program defined one white woman as a “stunner, looker, mantrap,” and “dish,” describing her as “a very attractive or seductive looking woman.” Many people of color have noted an obvious racist bias in their results. Jamal Jordan, a journalist at the New York Times, explained on Twitter that each of his uploaded photographs returned tags like “Black, Black African, Negroid, or Negro.” And when one user uploaded an image of the Democratic presidential candidates Andrew Yang and Joe Biden, Yang, who is Asian American, was tagged as “Buddhist” (he is not), while Biden was simply labeled a “grinner.”
In recent months, researchers have explored how biases against women and people of color manifest in facial recognition services offered by companies like Amazon, Microsoft, and IBM. Critics worry that this technology, which is increasingly sold to state and federal law enforcement agencies as a crime-fighting tool, might encourage police overreach. There are also concerns that such tools might unconstitutionally violate a person’s right to privacy under the Fourth Amendment.
“This exhibition shows how these images are part of a long tradition of capturing people’s images without their consent, in order to classify, segment, and often stereotype them in ways that evoke colonial projects of the past,” Paglen told the Art Newspaper.
For the artist, ImageNet’s problems are inherent to any kind of classification system. If AI learns from humans, the rationale goes, then it will inherit all the same biases that humans have. Training Humans simply exposes how technology’s air of objectivity is more façade than reality.
The problem with stereotypes is that they tend to be true.
The problem with stereotypes is that they are often NOT true and therefore cause people to be judged unfairly on circumstances out of their control. Not to mention that the “truth” behind many of these stereotypes stems from generations of societal inequality. For example, until recently you could argue that there were no great female artists… which may on the surface appear to have been true. But think a little bit more critically (if you are capable) and you’ll see that women and people of color were excluded from the institutions and confined to the margins of art history. Many great artworks by these marginalized groups were misattributed to men, including, most likely, the works of Shakespeare. Or they were never brought to the public eye because the artist lacked connections to the old boys’ club. Above all else, people who are stereotyped negatively typically have not been fortunate enough to have the same privileges as those who are not.
More often than not, however, the stereotype is true. That’s basically what “tends to be true” means. The causal nature of the stereotype, while related, is another matter, but I’m pretty sure you get that wrong as well.
It only “tends to be true” to those who tend not to do any research to justify their simplistic beliefs. Stereotypes are created by selectively assembling patterns to reinforce already held prejudices. But I’m pretty sure you’re used to getting subtle distinctions like that wrong as well.
“Stereotypes are created by assembling patterns”.
You agreed with mark and then proceeded to argue against the point you just agreed to; is this what cognitive dissonance looks like?
Well first, let’s learn how to quote. Wow!’s comment said “SELECTIVELY assembling patterns” (emphasis mine). So, why did you omit that word?
I was disappointed the article gave no objective definition or guidance for determining what to censor. Is it the image or the descriptive metadata?
In the Current Year, Artificial Intelligence must be augmented by Natural Stupidity.
So, yet again, reality proves itself politically incorrect, and instead of updating their worldview in line with the truth, fauxgressive radicals simply choose to whine that “even artificial intelligence is racist!” and shut it down, rather than change their bad, wrong opinions. Absolute ignorance and delusion.
Huh?
Misspellings, poor sentence structure, vague terminology, repeated ad hominem fallacies, over-generalizations, claims without evidence. Has education in this country fallen so low?
Whatever right-wing group is trolling Hyperallergic should recruit less linguistically-challenged individuals to rant out their ideology.
“Complains about ad hominem, uses it”
Must be nice having exceptional grammar but no capacity to understand averages/statistics. Not that I’m surprised; it’s pretty commonplace for simpletons (especially from the coastal areas) to create a false sense of intellect through finely tuned vocab. No one’s impressed.
As we know, no interpretation of a photograph of anything is free of cultural bias, and it is therefore political.
All these databases are clearly very biased towards the culture of the United States of America, and thus Western, first-world, etc.
Think how different their output might be if dominated by a different culture. And consider the controlling possibilities of databases constructed with overt state political involvement.
PS, even the ‘captcha’ to log in is culturally biased to the USA!!
Wrong step, imho. They should remove the racially biased tagging, not the images. Deep learning models need more diversity; otherwise the systems will become even more racially biased. At the end of the story, it’s about the people who tag images with racist attitudes – it’s the old, never-ending human-factor issue.
And with the evolution of AI we also need clear recognition between a cat and a human, for example. Which race this human belongs to doesn’t matter at all. This is what our xenophobic and chauvinist society doesn’t get: race, nationality, etc. don’t matter at all.
The art project “ImageNet Roulette” was a very important way of visualizing social issues and their implications. AI is neutral; it’s up to people what they train the systems with.