Tech & Terrorism: Facebook Puts Onus On Users to Identify Extremism

“(Social media) Platforms have learned that divisive content attracts the highest number of users and as such, the real power lies with these recommendation algorithms,” explained Dr. Hany Farid from UC Berkeley, regarding the role algorithmic amplification plays in the proliferation of harmful content. “Algorithmic amplification is the root cause of the unprecedented dissemination of hate speech, misinformation, conspiracy theories, and harmful content online.”

In early July, Facebook began testing a pilot program asking some users on its platform if they had been “exposed to harmful extremist content” or “concerned that someone you know is becoming an extremist.”

Under the program, called the “Redirect Initiative,” pop-up alerts redirect users to a support page if they choose to proceed.

The program marks Facebook’s latest effort to fend off more than a decade of criticism for the misuse of its platform, particularly after the perpetrator of the Christchurch mosque shootings livestreamed his attack on Facebook Live.

(The struggles of Facebook Inc. and other social media platforms to control content were demonstrated yet again as video of the New Zealand mosque shootings streamed live and remained available on the services hours after the attacks. Bloomberg’s Gerrit De Vynck reports on “Bloomberg Markets.” Courtesy Bloomberg Markets and Finance and YouTube. Posted on Mar 15, 2019.)

Criticism also mounted after far-right groups were found to have used Facebook groups and pages to promote violence during the 2020 U.S. presidential election.

“The Redirect Initiative is Facebook’s latest half measure to tackle extremism on its platform in which users are asked to do the policing instead of the companies themselves,” said Counter Extremism Project (CEP) Executive Director David Ibsen.

“By putting the onus on users, Facebook is deflecting from its responsibility to be more proactive about removing offending content.”

David Ibsen serves as Executive Director for the Counter Extremism Project (CEP), a not-for-profit, non-partisan, international policy organization formed to combat the growing threat from extremist ideologies.

“Moreover, Facebook’s initiative ignores a crucial root cause for the spread of extremist content—proprietary algorithms that have a perverse incentive to amplify divisive and controversial content to keep users on their sites and generate more revenue for the company.”

In June, CEP Senior Advisor Alexander Ritzmann published a policy brief on the European Union’s Digital Services Act (DSA), titled “‘Notice And (NO) Action’: Lessons (Not) Learned From Testing The Content Moderation Systems Of Very Large Social Media Platforms,” which found that “notice and action” systems do not work as intended.

“Notice and action” relies on users first notifying the platform of illegal content; the platform then determines whether the content should be removed.

(The shootings in New Zealand were captured in real time by the gunman himself. It was live-streamed with his manifesto also posted on social media. Courtesy of CGTN and YouTube. Posted on Mar 16, 2019.)

Based on six independent monitoring reports, Ritzmann found that the overall average takedown rate of illegal content reported through user notices to large social media platforms was a mere 42 percent.

Despite these alarming findings, the draft DSA (Article 14) favors the “notice and action” mechanism as the main content moderation system.

This system unrealistically expects the 400 million Internet users in the EU first to be exposed to illegal and possibly harmful content and then to notify the platforms about it.

(Hany Farid, a professor of computer science at UC Berkeley)

In a recent webinar, CEP Senior Advisor Dr. Hany Farid highlighted the consequences of major tech platforms’ algorithmic amplification of misinformation and divisive content on the Internet.

Regarding the role algorithmic amplification plays in the proliferation of harmful content, Dr. Farid stated, “Algorithmic amplification is the root cause of the unprecedented dissemination of hate speech, misinformation, conspiracy theories, and harmful content online.”

“Platforms have learned that divisive content attracts the highest number of users and as such, the real power lies with these recommendation algorithms.”

(On June 30, 2021, CEP hosted the first in a series of webinars with CEP Senior Advisor and UC Berkeley Professor Dr. Hany Farid on How Algorithmic Amplification Pushes Users Toward Divisive Content. Courtesy of Counter Extremism Project and YouTube. Posted on Jun 30, 2021.)

To learn more, please visit counterextremism.com.

To view CEP’s report, “‘Notice And (NO) Action’: Lessons (Not) Learned From Testing The Content Moderation Systems Of Very Large Social Media Platforms,” click here.
