In a blog post on Monday, Facebook described its first measures to combat the spread of “deepfakes,” videos manipulated using artificial intelligence or machine learning to depict situations that never happened. The company announced it will remove content that has been edited or synthesized, beyond adjustments for audio quality and clarity, in ways an average user would not be able to discern and that would likely mislead. The post also said Facebook will take steps to investigate and expose the videos’ creators.
Critics of the measures, however, are concerned that Facebook’s policy does not ban AI-manipulated media in its totality and allows for significant loopholes; for example, it does not extend to “content that is parody or satire” or to videos “edited solely to omit or change the order of words.” (Content could still be taken down, labeled as false, or have its circulation limited if it violates Facebook’s Community Standards, which include policies for hate speech and voter suppression, or does not pass reviews by the company’s third-party fact-checkers.)
Others say that by concentrating on deepfakes, Facebook is addressing only the most sophisticated manipulations while doing too little to tackle “shallowfakes,” media that may be crudely edited but can be just as deceptive. Last year, the company declined to take down a video, widely circulated on social media, that appeared to show House Speaker Nancy Pelosi slurring her words, causing outrage among those who view Facebook’s efforts to fight disinformation as insufficient. The new measures announced on Monday are unlikely to affect doctored videos like the so-called “Drunk Pelosi” clip, because the editing used to slow the Speaker’s speech is simple and low-tech compared with the techniques behind the presumably more pernicious deepfakes.
“Facebook wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation,” said Drew Hammill, a spokesman for Pelosi, in a tweet on Tuesday, January 7.
The blog post explains Facebook’s decision to flag rather than remove misleading content as strategic: “If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
The text was authored by Monika Bickert, Vice President for Global Policy Management at Facebook, who has been called to testify before the House Committee on Energy and Commerce today, January 8, in the hearing “Americans at Risk: Manipulation and Deception in the Digital Age.”
Although Bickert does not reference election interference explicitly in the post, the new policy appears to be part of Facebook’s wider efforts to stop the spread of false information in advance of the 2020 elections as it continues to grapple with its complicity in 2016. On Tuesday night, Facebook executive Andrew Bosworth posted a memo that he had meant for employees only but was leaked to the New York Times. “So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks,” wrote Bosworth. “He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.”