Facebook claims its AI reduced hate by 50% despite internal documents highlighting failures


Damning reports about the ineffectiveness of Facebook’s AI in countering hate speech prompted the firm to publish a post to the contrary, but the company’s own internal documents highlight serious failures.

Facebook has had a particularly rough time of late, with a series of Wall Street Journal reports claiming the company knows that “its platforms are riddled with flaws that cause harm” and that, “despite congressional hearings, its own pledges and numerous media exposés, the company didn’t fix them”.

Some of the allegations include:

  • An algorithm change made Facebook an “angrier” place, and CEO Mark Zuckerberg resisted suggested fixes because “they would lead people to interact with Facebook less”
  • Employees flag human traffickers, drug cartels, organ sellers, and more, but the response is “inadequate or nothing at all”
  • Facebook’s tools were used to sow doubt about the severity of Covid-19’s threat and the safety of vaccines
  • The company’s own engineers have doubts about Facebook’s public claim that AI will clean up the platform
  • Facebook knows Instagram is especially toxic for teen girls
  • A “secret elite” are exempt from the rules

The reports come predominantly from whistleblower Frances Haugen, who took “tens of thousands” of pages of internal documents from Facebook, plans to testify before Congress, and has filed at least eight SEC complaints claiming that Facebook lied to shareholders about its own products.

It makes you wonder whether former British Deputy PM Nick Clegg knew just how much he’d be taking on when he became Facebook’s VP for Global Affairs and Communications.

Over the weekend, Clegg released a blog post, but rather than addressing the reports it focused on Facebook’s plan to hire 10,000 people in Europe to help build its vision for the metaverse: a suspiciously timed announcement that many believe was intended to counter the negative news.

However, Facebook didn’t ignore the media reports entirely. Guy Rosen, VP of Integrity at Facebook, also released a blog post over the weekend titled ‘Hate Speech Prevalence Has Dropped by Almost 50% on Facebook’.

According to Facebook’s post, hate speech prevalence has dropped by almost 50 percent over the last three quarters:

When the company began reporting on hate speech metrics, just 23.6 percent of the content it removed was detected proactively by its systems before users reported it. Facebook claims that figure is now over 97 percent, and that there are now just five views of hate speech for every 10,000 content views on Facebook.
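For context, the two figures Facebook cites are ratios rather than raw counts: prevalence is expressed as views of violating content per 10,000 content views, while the proactive rate is the share of removed content that was flagged by automated systems rather than by user reports. A minimal sketch of how such metrics are computed, using illustrative function names and the numbers quoted above rather than Facebook’s actual code or data, might look like this:

```python
# Illustrative only: not Facebook's pipeline, just the arithmetic behind
# the two metrics quoted in the article.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Estimated views of violating content per 10,000 content views."""
    return 10_000 * violating_views / total_views

def proactive_rate(detected_by_systems: int, total_removed: int) -> float:
    """Percentage of removed content flagged by automated systems
    before any user report."""
    return 100 * detected_by_systems / total_removed

# Using the figures cited above as example inputs:
print(prevalence_per_10k(5, 10_000))   # 5.0 views per 10,000
print(proactive_rate(97, 100))         # 97.0 percent
```

Note that a falling prevalence figure and a rising proactive rate measure different things: the former depends on how much violating content users actually see, the latter only on how removals were triggered, which is part of why critics and the company can read the same data differently.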

“Data pulled from leaked documents is being used to create a narrative that the technology we use to fight hate speech is inadequate and that we deliberately misrepresent our progress,” Rosen said. “This is not true.”

One of the reports found that Facebook’s AI couldn’t identify first-person shooting videos or racist rants, and in one specific incident couldn’t distinguish cockfighting from a car crash. Haugen claims the company takes action on only 3-5 percent of hate speech and 0.6 percent of violence and incitement content.

In the latest exposé from the WSJ published on Sunday, Facebook employees told the outlet they don’t believe the company is capable of screening for offensive content. Employees claim that Facebook switched to largely using AI enforcement of the platform’s regulations around two years ago, which served to inflate the apparent success of its moderation tech in public statistics.

Clegg has called the WSJ’s reports “deliberate mischaracterisations” that use quotes from leaked material to create “a deliberately lop-sided view of the wider facts.”

Few people underestimate the challenge a platform like Facebook faces in catching hateful content and misinformation across billions of users – and doing so in a way that doesn’t suppress free speech – but the company doesn’t appear to be helping itself by overpromising what its AI systems can do and, reportedly, even willfully ignoring fixes to known problems over concerns they would reduce engagement.

(Photo by Prateek Katyal on Unsplash)
